00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 975 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3637 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.096 The recommended git tool is: git 00:00:00.097 using credential 00000000-0000-0000-0000-000000000002 00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.145 Fetching changes from the remote Git repository 00:00:00.147 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.199 Using shallow fetch with depth 1 00:00:00.199 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.199 > git --version # timeout=10 00:00:00.249 > git --version # 'git version 2.39.2' 00:00:00.249 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.290 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.290 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.817 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.828 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.840 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.840 > git config core.sparsecheckout # timeout=10 00:00:05.851 > git read-tree -mu HEAD # timeout=10 00:00:05.866 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.882 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.882 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.989 [Pipeline] Start of Pipeline 00:00:06.003 [Pipeline] library 00:00:06.005 Loading library shm_lib@master 00:00:06.005 Library shm_lib@master is cached. Copying from home. 00:00:06.017 [Pipeline] node 00:00:06.032 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.034 [Pipeline] { 00:00:06.040 [Pipeline] catchError 00:00:06.042 [Pipeline] { 00:00:06.054 [Pipeline] wrap 00:00:06.063 [Pipeline] { 00:00:06.072 [Pipeline] stage 00:00:06.074 [Pipeline] { (Prologue) 00:00:06.094 [Pipeline] echo 00:00:06.095 Node: VM-host-SM38 00:00:06.101 [Pipeline] cleanWs 00:00:06.114 [WS-CLEANUP] Deleting project workspace... 00:00:06.114 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.121 [WS-CLEANUP] done 00:00:06.311 [Pipeline] setCustomBuildProperty 00:00:06.379 [Pipeline] httpRequest 00:00:06.954 [Pipeline] echo 00:00:06.955 Sorcerer 10.211.164.20 is alive 00:00:06.962 [Pipeline] retry 00:00:06.963 [Pipeline] { 00:00:06.977 [Pipeline] httpRequest 00:00:06.981 HttpMethod: GET 00:00:06.982 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.982 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.984 Response Code: HTTP/1.1 200 OK 00:00:06.984 Success: Status code 200 is in the accepted range: 200,404 00:00:06.985 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.842 [Pipeline] } 00:00:07.856 [Pipeline] // retry 00:00:07.861 [Pipeline] sh 00:00:08.141 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.159 [Pipeline] httpRequest 00:00:08.541 [Pipeline] echo 00:00:08.543 Sorcerer 10.211.164.20 is alive 00:00:08.551 [Pipeline] retry 00:00:08.552 [Pipeline] { 00:00:08.566 [Pipeline] httpRequest 00:00:08.572 HttpMethod: GET 00:00:08.573 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:08.574 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:08.575 Response Code: HTTP/1.1 200 OK 00:00:08.576 Success: Status code 200 is in the accepted range: 200,404 00:00:08.576 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:28.684 [Pipeline] } 00:00:28.701 [Pipeline] // retry 00:00:28.708 [Pipeline] sh 00:00:28.995 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:31.598 [Pipeline] sh 00:00:31.884 + git -C spdk log --oneline -n5 00:00:31.884 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:31.884 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:31.884 4bcab9fb9 correct kick for CQ full case 00:00:31.884 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:31.884 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:31.925 [Pipeline] withCredentials 00:00:31.934 > git --version # timeout=10 00:00:31.945 > git --version # 'git version 2.39.2' 00:00:31.961 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:31.963 [Pipeline] { 00:00:31.969 [Pipeline] retry 00:00:31.971 [Pipeline] { 00:00:31.980 [Pipeline] sh 00:00:32.262 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:32.537 [Pipeline] } 00:00:32.555 [Pipeline] // retry 00:00:32.560 [Pipeline] } 00:00:32.576 [Pipeline] // withCredentials 00:00:32.586 [Pipeline] httpRequest 00:00:33.032 [Pipeline] echo 00:00:33.034 Sorcerer 10.211.164.20 is alive 00:00:33.043 [Pipeline] retry 00:00:33.045 [Pipeline] { 00:00:33.059 [Pipeline] httpRequest 00:00:33.065 HttpMethod: GET 00:00:33.066 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:33.066 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:33.070 Response Code: HTTP/1.1 200 OK 00:00:33.071 Success: Status code 200 is in the accepted range: 200,404 00:00:33.072 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:42.235 [Pipeline] } 
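The jbp, spdk, and dpdk sources above are each pulled as pre-packed tarballs from the Sorcerer cache at 10.211.164.20 via the pipeline's httpRequest step and then unpacked with tar --no-same-owner. A rough shell equivalent of one fetch/extract round trip, with curl standing in for the httpRequest step purely as an illustration (the URL, filename, and tar flags are taken from the log):

    # Illustration only: the pipeline uses the Jenkins httpRequest step, not curl.
    pkg=spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
    cd /var/jenkins/workspace/nvme-vg-autotest
    curl -fSo "${pkg}" "http://10.211.164.20/packages/${pkg}"
    tar --no-same-owner -xf "${pkg}"   # ignore ownership recorded in the archive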
00:01:42.255 [Pipeline] // retry 00:01:42.265 [Pipeline] sh 00:01:42.547 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:44.476 [Pipeline] sh 00:01:44.760 + git -C dpdk log --oneline -n5 00:01:44.760 eeb0605f11 version: 23.11.0 00:01:44.760 238778122a doc: update release notes for 23.11 00:01:44.760 46aa6b3cfc doc: fix description of RSS features 00:01:44.760 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:44.760 7e421ae345 devtools: support skipping forbid rule check 00:01:44.780 [Pipeline] writeFile 00:01:44.797 [Pipeline] sh 00:01:45.084 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:45.097 [Pipeline] sh 00:01:45.380 + cat autorun-spdk.conf 00:01:45.380 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.380 SPDK_TEST_NVME=1 00:01:45.380 SPDK_TEST_FTL=1 00:01:45.380 SPDK_TEST_ISAL=1 00:01:45.380 SPDK_RUN_ASAN=1 00:01:45.380 SPDK_RUN_UBSAN=1 00:01:45.380 SPDK_TEST_XNVME=1 00:01:45.380 SPDK_TEST_NVME_FDP=1 00:01:45.380 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.380 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:45.380 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:45.390 RUN_NIGHTLY=1 00:01:45.392 [Pipeline] } 00:01:45.406 [Pipeline] // stage 00:01:45.420 [Pipeline] stage 00:01:45.422 [Pipeline] { (Run VM) 00:01:45.434 [Pipeline] sh 00:01:45.801 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:45.801 + echo 'Start stage prepare_nvme.sh' 00:01:45.801 Start stage prepare_nvme.sh 00:01:45.802 + [[ -n 7 ]] 00:01:45.802 + disk_prefix=ex7 00:01:45.802 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:45.802 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:45.802 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:45.802 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.802 ++ SPDK_TEST_NVME=1 00:01:45.802 ++ SPDK_TEST_FTL=1 00:01:45.802 ++ SPDK_TEST_ISAL=1 00:01:45.802 ++ SPDK_RUN_ASAN=1 00:01:45.802 ++ SPDK_RUN_UBSAN=1 00:01:45.802 ++ SPDK_TEST_XNVME=1 00:01:45.802 ++ SPDK_TEST_NVME_FDP=1 00:01:45.802 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.802 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:45.802 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:45.802 ++ RUN_NIGHTLY=1 00:01:45.802 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:45.802 + nvme_files=() 00:01:45.802 + declare -A nvme_files 00:01:45.802 + backend_dir=/var/lib/libvirt/images/backends 00:01:45.802 + nvme_files['nvme.img']=5G 00:01:45.802 + nvme_files['nvme-cmb.img']=5G 00:01:45.802 + nvme_files['nvme-multi0.img']=4G 00:01:45.802 + nvme_files['nvme-multi1.img']=4G 00:01:45.802 + nvme_files['nvme-multi2.img']=4G 00:01:45.802 + nvme_files['nvme-openstack.img']=8G 00:01:45.802 + nvme_files['nvme-zns.img']=5G 00:01:45.802 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:45.802 + (( SPDK_TEST_FTL == 1 )) 00:01:45.802 + nvme_files["nvme-ftl.img"]=6G 00:01:45.802 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:45.802 + nvme_files["nvme-fdp.img"]=1G 00:01:45.802 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:45.802 + for nvme in "${!nvme_files[@]}" 00:01:45.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:45.802 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:45.802 + for nvme in "${!nvme_files[@]}" 00:01:45.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G 00:01:45.802 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:45.802 + for nvme in "${!nvme_files[@]}" 00:01:45.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:45.802 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:45.802 + for nvme in "${!nvme_files[@]}" 00:01:45.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:45.802 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:45.802 + for nvme in "${!nvme_files[@]}" 00:01:45.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:45.802 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:45.802 + for nvme in "${!nvme_files[@]}" 00:01:45.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:45.802 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:45.802 + for nvme in "${!nvme_files[@]}" 00:01:45.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:46.062 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.062 + for nvme in "${!nvme_files[@]}" 00:01:46.062 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G 00:01:46.062 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:46.062 + for nvme in "${!nvme_files[@]}" 00:01:46.062 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:46.062 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:46.062 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:46.062 + echo 'End stage prepare_nvme.sh' 00:01:46.062 End stage prepare_nvme.sh 00:01:46.076 [Pipeline] sh 00:01:46.365 + DISTRO=fedora39 00:01:46.365 + CPUS=10 00:01:46.365 + RAM=12288 00:01:46.365 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:46.365 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:46.365 00:01:46.365 
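The prepare_nvme.sh trace above fills a bash associative array with backing-image names and sizes (adding the FTL and FDP images only because SPDK_TEST_FTL and SPDK_TEST_NVME_FDP are set in autorun-spdk.conf) and then calls create_nvme_img.sh once per entry. Condensed into a standalone sketch using the same interface, paths, and sizes shown in the trace; only the grouping into one snippet is editorial:

    declare -A nvme_files=(
        [nvme.img]=5G          [nvme-cmb.img]=5G      [nvme-zns.img]=5G
        [nvme-multi0.img]=4G   [nvme-multi1.img]=4G   [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G
    )
    (( SPDK_TEST_FTL == 1 ))      && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        # -n: target image path (ex7 is this job's disk prefix), -s: image size
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "${backend_dir}/ex7-${nvme}" -s "${nvme_files[$nvme]}"
    done

The resulting raw images are what the vagrant/QEMU step below attaches as emulated NVMe namespaces, including the FDP-enabled subsystem on the nvme-3 controller.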
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:46.365 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:46.365 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:46.365 HELP=0 00:01:46.365 DRY_RUN=0 00:01:46.365 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img, 00:01:46.365 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:46.365 NVME_AUTO_CREATE=0 00:01:46.365 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,, 00:01:46.365 NVME_CMB=,,,, 00:01:46.365 NVME_PMR=,,,, 00:01:46.365 NVME_ZNS=,,,, 00:01:46.365 NVME_MS=true,,,, 00:01:46.365 NVME_FDP=,,,on, 00:01:46.365 SPDK_VAGRANT_DISTRO=fedora39 00:01:46.365 SPDK_VAGRANT_VMCPU=10 00:01:46.365 SPDK_VAGRANT_VMRAM=12288 00:01:46.365 SPDK_VAGRANT_PROVIDER=libvirt 00:01:46.365 SPDK_VAGRANT_HTTP_PROXY= 00:01:46.365 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:46.365 SPDK_OPENSTACK_NETWORK=0 00:01:46.365 VAGRANT_PACKAGE_BOX=0 00:01:46.365 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:46.365 FORCE_DISTRO=true 00:01:46.365 VAGRANT_BOX_VERSION= 00:01:46.365 EXTRA_VAGRANTFILES= 00:01:46.365 NIC_MODEL=e1000 00:01:46.365 00:01:46.365 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:46.365 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:48.915 Bringing machine 'default' up with 'libvirt' provider... 00:01:49.176 ==> default: Creating image (snapshot of base box volume). 00:01:49.436 ==> default: Creating domain with the following settings... 
00:01:49.436 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731816454_ae6c0d5ef32b592ed1b6 00:01:49.436 ==> default: -- Domain type: kvm 00:01:49.436 ==> default: -- Cpus: 10 00:01:49.436 ==> default: -- Feature: acpi 00:01:49.436 ==> default: -- Feature: apic 00:01:49.436 ==> default: -- Feature: pae 00:01:49.436 ==> default: -- Memory: 12288M 00:01:49.436 ==> default: -- Memory Backing: hugepages: 00:01:49.436 ==> default: -- Management MAC: 00:01:49.436 ==> default: -- Loader: 00:01:49.436 ==> default: -- Nvram: 00:01:49.436 ==> default: -- Base box: spdk/fedora39 00:01:49.436 ==> default: -- Storage pool: default 00:01:49.436 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731816454_ae6c0d5ef32b592ed1b6.img (20G) 00:01:49.436 ==> default: -- Volume Cache: default 00:01:49.436 ==> default: -- Kernel: 00:01:49.436 ==> default: -- Initrd: 00:01:49.436 ==> default: -- Graphics Type: vnc 00:01:49.436 ==> default: -- Graphics Port: -1 00:01:49.436 ==> default: -- Graphics IP: 127.0.0.1 00:01:49.436 ==> default: -- Graphics Password: Not defined 00:01:49.436 ==> default: -- Video Type: cirrus 00:01:49.436 ==> default: -- Video VRAM: 9216 00:01:49.436 ==> default: -- Sound Type: 00:01:49.436 ==> default: -- Keymap: en-us 00:01:49.436 ==> default: -- TPM Path: 00:01:49.436 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:49.436 ==> default: -- Command line args: 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:49.436 ==> default: -> value=-drive, 00:01:49.436 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:49.436 ==> default: -> value=-drive, 00:01:49.436 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0, 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:49.436 ==> default: -> value=-drive, 00:01:49.436 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:49.436 ==> default: -> value=-drive, 00:01:49.436 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:49.436 ==> default: -> value=-drive, 00:01:49.436 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:49.436 ==> default: -> value=-drive, 00:01:49.436 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:49.436 ==> default: -> value=-device, 00:01:49.436 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:49.694 ==> default: Creating shared folders metadata... 00:01:49.694 ==> default: Starting domain. 00:01:51.606 ==> default: Waiting for domain to get an IP address... 00:02:13.563 ==> default: Waiting for SSH to become available... 00:02:13.563 ==> default: Configuring and enabling network interfaces... 00:02:14.949 default: SSH address: 192.168.121.37:22 00:02:14.949 default: SSH username: vagrant 00:02:14.949 default: SSH auth method: private key 00:02:17.520 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:25.663 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:30.983 ==> default: Mounting SSHFS shared folder... 00:02:32.897 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:32.897 ==> default: Checking Mount.. 00:02:33.840 ==> default: Folder Successfully Mounted! 00:02:33.840 00:02:33.840 SUCCESS! 00:02:33.840 00:02:33.840 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:33.840 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:33.840 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:33.840 00:02:33.851 [Pipeline] } 00:02:33.866 [Pipeline] // stage 00:02:33.875 [Pipeline] dir 00:02:33.876 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:33.878 [Pipeline] { 00:02:33.890 [Pipeline] catchError 00:02:33.892 [Pipeline] { 00:02:33.904 [Pipeline] sh 00:02:34.188 + vagrant ssh-config --host vagrant 00:02:34.188 + sed -ne '/^Host/,$p' 00:02:34.188 + tee ssh_conf 00:02:37.561 Host vagrant 00:02:37.561 HostName 192.168.121.37 00:02:37.561 User vagrant 00:02:37.561 Port 22 00:02:37.561 UserKnownHostsFile /dev/null 00:02:37.561 StrictHostKeyChecking no 00:02:37.561 PasswordAuthentication no 00:02:37.561 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:37.561 IdentitiesOnly yes 00:02:37.561 LogLevel FATAL 00:02:37.561 ForwardAgent yes 00:02:37.561 ForwardX11 yes 00:02:37.561 00:02:37.577 [Pipeline] withEnv 00:02:37.580 [Pipeline] { 00:02:37.596 [Pipeline] sh 00:02:37.881 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:37.881 source /etc/os-release 00:02:37.881 [[ -e /image.version ]] && img=$(< /image.version) 00:02:37.881 # Minimal, systemd-like check. 
00:02:37.881 if [[ -e /.dockerenv ]]; then 00:02:37.881 # Clear garbage from the node'\''s name: 00:02:37.881 # agt-er_autotest_547-896 -> autotest_547-896 00:02:37.881 # $HOSTNAME is the actual container id 00:02:37.881 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:37.881 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:37.881 # We can assume this is a mount from a host where container is running, 00:02:37.881 # so fetch its hostname to easily identify the target swarm worker. 00:02:37.881 container="$(< /etc/hostname) ($agent)" 00:02:37.881 else 00:02:37.881 # Fallback 00:02:37.881 container=$agent 00:02:37.881 fi 00:02:37.881 fi 00:02:37.881 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:37.881 ' 00:02:38.155 [Pipeline] } 00:02:38.172 [Pipeline] // withEnv 00:02:38.181 [Pipeline] setCustomBuildProperty 00:02:38.196 [Pipeline] stage 00:02:38.198 [Pipeline] { (Tests) 00:02:38.215 [Pipeline] sh 00:02:38.500 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:38.775 [Pipeline] sh 00:02:39.060 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:39.337 [Pipeline] timeout 00:02:39.338 Timeout set to expire in 50 min 00:02:39.340 [Pipeline] { 00:02:39.355 [Pipeline] sh 00:02:39.638 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:40.209 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:02:40.222 [Pipeline] sh 00:02:40.505 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:40.781 [Pipeline] sh 00:02:41.063 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:41.349 [Pipeline] sh 00:02:41.633 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:41.893 ++ readlink -f spdk_repo 00:02:41.893 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:41.893 + [[ -n /home/vagrant/spdk_repo ]] 00:02:41.893 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:41.893 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:41.893 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:41.893 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:41.893 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:41.893 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:41.893 + cd /home/vagrant/spdk_repo 00:02:41.893 + source /etc/os-release 00:02:41.893 ++ NAME='Fedora Linux' 00:02:41.893 ++ VERSION='39 (Cloud Edition)' 00:02:41.893 ++ ID=fedora 00:02:41.893 ++ VERSION_ID=39 00:02:41.893 ++ VERSION_CODENAME= 00:02:41.893 ++ PLATFORM_ID=platform:f39 00:02:41.893 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:41.893 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:41.893 ++ LOGO=fedora-logo-icon 00:02:41.893 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:41.893 ++ HOME_URL=https://fedoraproject.org/ 00:02:41.893 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:41.893 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:41.893 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:41.893 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:41.893 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:41.893 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:41.893 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:41.893 ++ SUPPORT_END=2024-11-12 00:02:41.893 ++ VARIANT='Cloud Edition' 00:02:41.893 ++ VARIANT_ID=cloud 00:02:41.893 + uname -a 00:02:41.893 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:41.893 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:42.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:42.414 Hugepages 00:02:42.414 node hugesize free / total 00:02:42.414 node0 1048576kB 0 / 0 00:02:42.414 node0 2048kB 0 / 0 00:02:42.414 00:02:42.414 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:42.414 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:42.414 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:42.414 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:42.414 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:42.414 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:42.414 + rm -f /tmp/spdk-ld-path 00:02:42.414 + source autorun-spdk.conf 00:02:42.414 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.414 ++ SPDK_TEST_NVME=1 00:02:42.414 ++ SPDK_TEST_FTL=1 00:02:42.414 ++ SPDK_TEST_ISAL=1 00:02:42.414 ++ SPDK_RUN_ASAN=1 00:02:42.414 ++ SPDK_RUN_UBSAN=1 00:02:42.414 ++ SPDK_TEST_XNVME=1 00:02:42.414 ++ SPDK_TEST_NVME_FDP=1 00:02:42.414 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:42.414 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:42.414 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:42.414 ++ RUN_NIGHTLY=1 00:02:42.414 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:42.414 + [[ -n '' ]] 00:02:42.414 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:42.675 + for M in /var/spdk/build-*-manifest.txt 00:02:42.675 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:42.675 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:42.675 + for M in /var/spdk/build-*-manifest.txt 00:02:42.675 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:42.675 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:42.675 + for M in /var/spdk/build-*-manifest.txt 00:02:42.675 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:42.675 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:42.675 ++ uname 00:02:42.675 + [[ Linux == 
\L\i\n\u\x ]] 00:02:42.675 + sudo dmesg -T 00:02:42.675 + sudo dmesg --clear 00:02:42.675 + dmesg_pid=5768 00:02:42.675 + [[ Fedora Linux == FreeBSD ]] 00:02:42.675 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:42.675 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:42.675 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:42.675 + [[ -x /usr/src/fio-static/fio ]] 00:02:42.675 + sudo dmesg -Tw 00:02:42.675 + export FIO_BIN=/usr/src/fio-static/fio 00:02:42.675 + FIO_BIN=/usr/src/fio-static/fio 00:02:42.675 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:42.675 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:42.675 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:42.675 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:42.675 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:42.675 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:42.675 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:42.675 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:42.675 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:42.675 04:08:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:42.675 04:08:28 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:42.675 04:08:28 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:42.675 04:08:28 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:42.675 04:08:28 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:42.675 04:08:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:42.675 04:08:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:42.675 04:08:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:42.675 04:08:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:42.675 04:08:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:42.675 04:08:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:42.675 04:08:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.676 04:08:28 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.676 04:08:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.676 04:08:28 -- paths/export.sh@5 -- $ export PATH 00:02:42.676 04:08:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.676 04:08:28 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:42.676 04:08:28 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:42.676 04:08:28 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731816508.XXXXXX 00:02:42.937 04:08:28 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731816508.jA9yqf 00:02:42.937 04:08:28 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:42.937 04:08:28 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:02:42.937 04:08:28 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:42.937 04:08:28 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:42.937 04:08:28 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:42.937 04:08:28 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:42.937 04:08:28 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:42.938 04:08:28 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:42.938 04:08:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.938 04:08:28 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:42.938 04:08:28 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:42.938 04:08:28 -- pm/common@17 -- $ local monitor 00:02:42.938 04:08:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.938 04:08:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.938 04:08:28 -- pm/common@25 -- $ 
sleep 1 00:02:42.938 04:08:28 -- pm/common@21 -- $ date +%s 00:02:42.938 04:08:28 -- pm/common@21 -- $ date +%s 00:02:42.938 04:08:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731816508 00:02:42.938 04:08:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731816508 00:02:42.938 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731816508_collect-cpu-load.pm.log 00:02:42.938 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731816508_collect-vmstat.pm.log 00:02:43.879 04:08:29 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:43.879 04:08:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:43.879 04:08:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:43.879 04:08:29 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:43.879 04:08:29 -- spdk/autobuild.sh@16 -- $ date -u 00:02:43.879 Sun Nov 17 04:08:29 AM UTC 2024 00:02:43.879 04:08:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:43.879 v25.01-pre-189-g83e8405e4 00:02:43.879 04:08:29 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:43.879 04:08:29 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:43.879 04:08:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:43.879 04:08:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:43.879 04:08:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.879 ************************************ 00:02:43.879 START TEST asan 00:02:43.879 ************************************ 00:02:43.879 using asan 00:02:43.879 04:08:29 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:43.879 00:02:43.879 real 0m0.000s 00:02:43.879 user 0m0.000s 00:02:43.879 sys 0m0.000s 00:02:43.879 04:08:29 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:43.879 ************************************ 00:02:43.879 END TEST asan 00:02:43.879 ************************************ 00:02:43.879 04:08:29 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:43.879 04:08:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:43.879 04:08:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:43.879 04:08:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:43.879 04:08:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:43.879 04:08:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.879 ************************************ 00:02:43.879 START TEST ubsan 00:02:43.879 ************************************ 00:02:43.879 using ubsan 00:02:43.879 04:08:29 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:43.879 00:02:43.879 real 0m0.000s 00:02:43.879 user 0m0.000s 00:02:43.879 sys 0m0.000s 00:02:43.879 04:08:29 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:43.879 ************************************ 00:02:43.879 END TEST ubsan 00:02:43.879 ************************************ 00:02:43.879 04:08:29 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:43.879 04:08:29 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:43.879 04:08:29 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:43.879 04:08:29 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:43.879 04:08:29 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 
']' 00:02:43.879 04:08:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:43.879 04:08:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.879 ************************************ 00:02:43.879 START TEST build_native_dpdk 00:02:43.879 ************************************ 00:02:43.879 04:08:29 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:43.879 04:08:29 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:44.141 eeb0605f11 version: 23.11.0 00:02:44.141 238778122a doc: update release notes for 23.11 00:02:44.141 46aa6b3cfc doc: fix description of RSS features 00:02:44.141 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:44.141 7e421ae345 devtools: support skipping forbid rule check 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:44.141 04:08:29 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:44.141 patching file config/rte_config.h 00:02:44.141 Hunk #1 succeeded at 60 (offset 1 line). 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:44.141 patching file lib/pcapng/rte_pcapng.c 00:02:44.141 04:08:29 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:44.141 04:08:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:44.142 04:08:29 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:44.142 04:08:29 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:44.142 04:08:29 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:44.142 04:08:29 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:44.142 04:08:29 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:44.142 04:08:29 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:48.348 The Meson build system 00:02:48.349 Version: 1.5.0 00:02:48.349 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:48.349 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:48.349 Build type: native build 00:02:48.349 Program cat found: YES (/usr/bin/cat) 00:02:48.349 Project name: DPDK 00:02:48.349 Project version: 23.11.0 00:02:48.349 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:48.349 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:48.349 Host machine cpu family: x86_64 00:02:48.349 Host machine cpu: x86_64 00:02:48.349 Message: ## Building in Developer Mode ## 00:02:48.349 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:48.349 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:48.349 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:48.349 Program python3 found: YES (/usr/bin/python3) 00:02:48.349 Program cat found: YES (/usr/bin/cat) 00:02:48.349 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
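The DPDK configure invocation captured just above reduces to a single meson call; the flags are taken verbatim from the trace, while the ninja compile and install into the --prefix directory (which SPDK later consumes through --with-dpdk=/home/vagrant/spdk_repo/dpdk/build) are the usual follow-up and are assumed here rather than copied from this excerpt:

    cd /home/vagrant/spdk_repo/dpdk
    meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
    ninja -C build-tmp && meson install -C build-tmp   # assumed follow-up, not shown in this excerpt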
00:02:48.349 Compiler for C supports arguments -march=native: YES 00:02:48.349 Checking for size of "void *" : 8 00:02:48.349 Checking for size of "void *" : 8 (cached) 00:02:48.349 Library m found: YES 00:02:48.349 Library numa found: YES 00:02:48.349 Has header "numaif.h" : YES 00:02:48.349 Library fdt found: NO 00:02:48.349 Library execinfo found: NO 00:02:48.349 Has header "execinfo.h" : YES 00:02:48.349 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:48.349 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:48.349 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:48.349 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:48.349 Run-time dependency openssl found: YES 3.1.1 00:02:48.349 Run-time dependency libpcap found: YES 1.10.4 00:02:48.349 Has header "pcap.h" with dependency libpcap: YES 00:02:48.349 Compiler for C supports arguments -Wcast-qual: YES 00:02:48.349 Compiler for C supports arguments -Wdeprecated: YES 00:02:48.349 Compiler for C supports arguments -Wformat: YES 00:02:48.349 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:48.349 Compiler for C supports arguments -Wformat-security: NO 00:02:48.349 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:48.349 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:48.349 Compiler for C supports arguments -Wnested-externs: YES 00:02:48.349 Compiler for C supports arguments -Wold-style-definition: YES 00:02:48.349 Compiler for C supports arguments -Wpointer-arith: YES 00:02:48.349 Compiler for C supports arguments -Wsign-compare: YES 00:02:48.349 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:48.349 Compiler for C supports arguments -Wundef: YES 00:02:48.349 Compiler for C supports arguments -Wwrite-strings: YES 00:02:48.349 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:48.349 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:48.349 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:48.349 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:48.349 Program objdump found: YES (/usr/bin/objdump) 00:02:48.349 Compiler for C supports arguments -mavx512f: YES 00:02:48.349 Checking if "AVX512 checking" compiles: YES 00:02:48.349 Fetching value of define "__SSE4_2__" : 1 00:02:48.349 Fetching value of define "__AES__" : 1 00:02:48.349 Fetching value of define "__AVX__" : 1 00:02:48.349 Fetching value of define "__AVX2__" : 1 00:02:48.349 Fetching value of define "__AVX512BW__" : 1 00:02:48.349 Fetching value of define "__AVX512CD__" : 1 00:02:48.349 Fetching value of define "__AVX512DQ__" : 1 00:02:48.349 Fetching value of define "__AVX512F__" : 1 00:02:48.349 Fetching value of define "__AVX512VL__" : 1 00:02:48.349 Fetching value of define "__PCLMUL__" : 1 00:02:48.349 Fetching value of define "__RDRND__" : 1 00:02:48.349 Fetching value of define "__RDSEED__" : 1 00:02:48.349 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:48.349 Fetching value of define "__znver1__" : (undefined) 00:02:48.349 Fetching value of define "__znver2__" : (undefined) 00:02:48.349 Fetching value of define "__znver3__" : (undefined) 00:02:48.349 Fetching value of define "__znver4__" : (undefined) 00:02:48.349 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:48.349 Message: lib/log: Defining dependency "log" 00:02:48.349 Message: lib/kvargs: Defining dependency "kvargs" 00:02:48.349 Message: lib/telemetry: Defining dependency "telemetry" 
00:02:48.349 Checking for function "getentropy" : NO 00:02:48.349 Message: lib/eal: Defining dependency "eal" 00:02:48.349 Message: lib/ring: Defining dependency "ring" 00:02:48.349 Message: lib/rcu: Defining dependency "rcu" 00:02:48.349 Message: lib/mempool: Defining dependency "mempool" 00:02:48.349 Message: lib/mbuf: Defining dependency "mbuf" 00:02:48.349 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:48.349 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:48.349 Compiler for C supports arguments -mpclmul: YES 00:02:48.349 Compiler for C supports arguments -maes: YES 00:02:48.349 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:48.349 Compiler for C supports arguments -mavx512bw: YES 00:02:48.349 Compiler for C supports arguments -mavx512dq: YES 00:02:48.349 Compiler for C supports arguments -mavx512vl: YES 00:02:48.349 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:48.349 Compiler for C supports arguments -mavx2: YES 00:02:48.349 Compiler for C supports arguments -mavx: YES 00:02:48.349 Message: lib/net: Defining dependency "net" 00:02:48.349 Message: lib/meter: Defining dependency "meter" 00:02:48.349 Message: lib/ethdev: Defining dependency "ethdev" 00:02:48.349 Message: lib/pci: Defining dependency "pci" 00:02:48.349 Message: lib/cmdline: Defining dependency "cmdline" 00:02:48.349 Message: lib/metrics: Defining dependency "metrics" 00:02:48.349 Message: lib/hash: Defining dependency "hash" 00:02:48.349 Message: lib/timer: Defining dependency "timer" 00:02:48.349 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:48.349 Message: lib/acl: Defining dependency "acl" 00:02:48.349 Message: lib/bbdev: Defining dependency "bbdev" 00:02:48.349 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:48.349 Run-time dependency libelf found: YES 0.191 00:02:48.349 Message: lib/bpf: Defining dependency "bpf" 00:02:48.349 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:48.349 Message: lib/compressdev: Defining dependency "compressdev" 00:02:48.349 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:48.349 Message: lib/distributor: Defining dependency "distributor" 00:02:48.349 Message: lib/dmadev: Defining dependency "dmadev" 00:02:48.349 Message: lib/efd: Defining dependency "efd" 00:02:48.349 Message: lib/eventdev: Defining dependency "eventdev" 00:02:48.349 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:48.349 Message: lib/gpudev: Defining dependency "gpudev" 00:02:48.349 Message: lib/gro: Defining dependency "gro" 00:02:48.349 Message: lib/gso: Defining dependency "gso" 00:02:48.349 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:48.349 Message: lib/jobstats: Defining dependency "jobstats" 00:02:48.349 Message: lib/latencystats: Defining dependency "latencystats" 00:02:48.349 Message: lib/lpm: Defining dependency "lpm" 00:02:48.349 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512IFMA__" : 1 00:02:48.349 Message: 
lib/member: Defining dependency "member" 00:02:48.349 Message: lib/pcapng: Defining dependency "pcapng" 00:02:48.349 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:48.349 Message: lib/power: Defining dependency "power" 00:02:48.349 Message: lib/rawdev: Defining dependency "rawdev" 00:02:48.349 Message: lib/regexdev: Defining dependency "regexdev" 00:02:48.349 Message: lib/mldev: Defining dependency "mldev" 00:02:48.349 Message: lib/rib: Defining dependency "rib" 00:02:48.349 Message: lib/reorder: Defining dependency "reorder" 00:02:48.349 Message: lib/sched: Defining dependency "sched" 00:02:48.349 Message: lib/security: Defining dependency "security" 00:02:48.349 Message: lib/stack: Defining dependency "stack" 00:02:48.349 Has header "linux/userfaultfd.h" : YES 00:02:48.349 Has header "linux/vduse.h" : YES 00:02:48.349 Message: lib/vhost: Defining dependency "vhost" 00:02:48.349 Message: lib/ipsec: Defining dependency "ipsec" 00:02:48.349 Message: lib/pdcp: Defining dependency "pdcp" 00:02:48.349 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:48.349 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:48.349 Message: lib/fib: Defining dependency "fib" 00:02:48.349 Message: lib/port: Defining dependency "port" 00:02:48.349 Message: lib/pdump: Defining dependency "pdump" 00:02:48.349 Message: lib/table: Defining dependency "table" 00:02:48.349 Message: lib/pipeline: Defining dependency "pipeline" 00:02:48.349 Message: lib/graph: Defining dependency "graph" 00:02:48.349 Message: lib/node: Defining dependency "node" 00:02:48.349 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:48.349 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:48.349 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:48.349 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:49.736 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:49.736 Compiler for C supports arguments -Wno-unused-value: YES 00:02:49.736 Compiler for C supports arguments -Wno-format: YES 00:02:49.736 Compiler for C supports arguments -Wno-format-security: YES 00:02:49.736 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:49.736 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:49.736 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:49.736 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:49.736 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:49.736 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:49.736 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:49.736 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:49.736 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:49.736 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:49.736 Has header "sys/epoll.h" : YES 00:02:49.736 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:49.736 Configuring doxy-api-html.conf using configuration 00:02:49.736 Configuring doxy-api-man.conf using configuration 00:02:49.736 Program mandb found: YES (/usr/bin/mandb) 00:02:49.736 Program sphinx-build found: NO 00:02:49.736 Configuring rte_build_config.h using configuration 00:02:49.736 Message: 00:02:49.736 ================= 00:02:49.736 Applications Enabled 00:02:49.736 ================= 00:02:49.736 00:02:49.736 apps: 00:02:49.736 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:02:49.736 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:49.736 test-pmd, test-regex, test-sad, test-security-perf, 00:02:49.736 00:02:49.736 Message: 00:02:49.736 ================= 00:02:49.736 Libraries Enabled 00:02:49.736 ================= 00:02:49.736 00:02:49.736 libs: 00:02:49.736 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:49.736 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:49.736 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:49.736 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:49.736 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:49.736 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:49.736 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:49.736 00:02:49.736 00:02:49.736 Message: 00:02:49.736 =============== 00:02:49.736 Drivers Enabled 00:02:49.736 =============== 00:02:49.736 00:02:49.736 common: 00:02:49.736 00:02:49.736 bus: 00:02:49.736 pci, vdev, 00:02:49.736 mempool: 00:02:49.736 ring, 00:02:49.736 dma: 00:02:49.736 00:02:49.736 net: 00:02:49.736 i40e, 00:02:49.736 raw: 00:02:49.736 00:02:49.736 crypto: 00:02:49.736 00:02:49.736 compress: 00:02:49.736 00:02:49.736 regex: 00:02:49.736 00:02:49.736 ml: 00:02:49.736 00:02:49.736 vdpa: 00:02:49.736 00:02:49.736 event: 00:02:49.736 00:02:49.736 baseband: 00:02:49.736 00:02:49.736 gpu: 00:02:49.736 00:02:49.736 00:02:49.736 Message: 00:02:49.736 ================= 00:02:49.736 Content Skipped 00:02:49.736 ================= 00:02:49.736 00:02:49.736 apps: 00:02:49.736 00:02:49.736 libs: 00:02:49.736 00:02:49.736 drivers: 00:02:49.736 common/cpt: not in enabled drivers build config 00:02:49.736 common/dpaax: not in enabled drivers build config 00:02:49.736 common/iavf: not in enabled drivers build config 00:02:49.736 common/idpf: not in enabled drivers build config 00:02:49.736 common/mvep: not in enabled drivers build config 00:02:49.736 common/octeontx: not in enabled drivers build config 00:02:49.736 bus/auxiliary: not in enabled drivers build config 00:02:49.736 bus/cdx: not in enabled drivers build config 00:02:49.736 bus/dpaa: not in enabled drivers build config 00:02:49.736 bus/fslmc: not in enabled drivers build config 00:02:49.736 bus/ifpga: not in enabled drivers build config 00:02:49.737 bus/platform: not in enabled drivers build config 00:02:49.737 bus/vmbus: not in enabled drivers build config 00:02:49.737 common/cnxk: not in enabled drivers build config 00:02:49.737 common/mlx5: not in enabled drivers build config 00:02:49.737 common/nfp: not in enabled drivers build config 00:02:49.737 common/qat: not in enabled drivers build config 00:02:49.737 common/sfc_efx: not in enabled drivers build config 00:02:49.737 mempool/bucket: not in enabled drivers build config 00:02:49.737 mempool/cnxk: not in enabled drivers build config 00:02:49.737 mempool/dpaa: not in enabled drivers build config 00:02:49.737 mempool/dpaa2: not in enabled drivers build config 00:02:49.737 mempool/octeontx: not in enabled drivers build config 00:02:49.737 mempool/stack: not in enabled drivers build config 00:02:49.737 dma/cnxk: not in enabled drivers build config 00:02:49.737 dma/dpaa: not in enabled drivers build config 00:02:49.737 dma/dpaa2: not in enabled drivers build config 00:02:49.737 dma/hisilicon: not in enabled drivers build config 00:02:49.737 dma/idxd: not in enabled drivers build 
config 00:02:49.737 dma/ioat: not in enabled drivers build config 00:02:49.737 dma/skeleton: not in enabled drivers build config 00:02:49.737 net/af_packet: not in enabled drivers build config 00:02:49.737 net/af_xdp: not in enabled drivers build config 00:02:49.737 net/ark: not in enabled drivers build config 00:02:49.737 net/atlantic: not in enabled drivers build config 00:02:49.737 net/avp: not in enabled drivers build config 00:02:49.737 net/axgbe: not in enabled drivers build config 00:02:49.737 net/bnx2x: not in enabled drivers build config 00:02:49.737 net/bnxt: not in enabled drivers build config 00:02:49.737 net/bonding: not in enabled drivers build config 00:02:49.737 net/cnxk: not in enabled drivers build config 00:02:49.737 net/cpfl: not in enabled drivers build config 00:02:49.737 net/cxgbe: not in enabled drivers build config 00:02:49.737 net/dpaa: not in enabled drivers build config 00:02:49.737 net/dpaa2: not in enabled drivers build config 00:02:49.737 net/e1000: not in enabled drivers build config 00:02:49.737 net/ena: not in enabled drivers build config 00:02:49.737 net/enetc: not in enabled drivers build config 00:02:49.737 net/enetfec: not in enabled drivers build config 00:02:49.737 net/enic: not in enabled drivers build config 00:02:49.737 net/failsafe: not in enabled drivers build config 00:02:49.737 net/fm10k: not in enabled drivers build config 00:02:49.737 net/gve: not in enabled drivers build config 00:02:49.737 net/hinic: not in enabled drivers build config 00:02:49.737 net/hns3: not in enabled drivers build config 00:02:49.737 net/iavf: not in enabled drivers build config 00:02:49.737 net/ice: not in enabled drivers build config 00:02:49.737 net/idpf: not in enabled drivers build config 00:02:49.737 net/igc: not in enabled drivers build config 00:02:49.737 net/ionic: not in enabled drivers build config 00:02:49.737 net/ipn3ke: not in enabled drivers build config 00:02:49.737 net/ixgbe: not in enabled drivers build config 00:02:49.737 net/mana: not in enabled drivers build config 00:02:49.737 net/memif: not in enabled drivers build config 00:02:49.737 net/mlx4: not in enabled drivers build config 00:02:49.737 net/mlx5: not in enabled drivers build config 00:02:49.737 net/mvneta: not in enabled drivers build config 00:02:49.737 net/mvpp2: not in enabled drivers build config 00:02:49.737 net/netvsc: not in enabled drivers build config 00:02:49.737 net/nfb: not in enabled drivers build config 00:02:49.737 net/nfp: not in enabled drivers build config 00:02:49.737 net/ngbe: not in enabled drivers build config 00:02:49.737 net/null: not in enabled drivers build config 00:02:49.737 net/octeontx: not in enabled drivers build config 00:02:49.737 net/octeon_ep: not in enabled drivers build config 00:02:49.737 net/pcap: not in enabled drivers build config 00:02:49.737 net/pfe: not in enabled drivers build config 00:02:49.737 net/qede: not in enabled drivers build config 00:02:49.737 net/ring: not in enabled drivers build config 00:02:49.737 net/sfc: not in enabled drivers build config 00:02:49.737 net/softnic: not in enabled drivers build config 00:02:49.737 net/tap: not in enabled drivers build config 00:02:49.737 net/thunderx: not in enabled drivers build config 00:02:49.737 net/txgbe: not in enabled drivers build config 00:02:49.737 net/vdev_netvsc: not in enabled drivers build config 00:02:49.737 net/vhost: not in enabled drivers build config 00:02:49.737 net/virtio: not in enabled drivers build config 00:02:49.737 net/vmxnet3: not in enabled drivers build config 
00:02:49.737 raw/cnxk_bphy: not in enabled drivers build config 00:02:49.737 raw/cnxk_gpio: not in enabled drivers build config 00:02:49.737 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:49.737 raw/ifpga: not in enabled drivers build config 00:02:49.737 raw/ntb: not in enabled drivers build config 00:02:49.737 raw/skeleton: not in enabled drivers build config 00:02:49.737 crypto/armv8: not in enabled drivers build config 00:02:49.737 crypto/bcmfs: not in enabled drivers build config 00:02:49.737 crypto/caam_jr: not in enabled drivers build config 00:02:49.737 crypto/ccp: not in enabled drivers build config 00:02:49.737 crypto/cnxk: not in enabled drivers build config 00:02:49.737 crypto/dpaa_sec: not in enabled drivers build config 00:02:49.737 crypto/dpaa2_sec: not in enabled drivers build config 00:02:49.737 crypto/ipsec_mb: not in enabled drivers build config 00:02:49.737 crypto/mlx5: not in enabled drivers build config 00:02:49.737 crypto/mvsam: not in enabled drivers build config 00:02:49.737 crypto/nitrox: not in enabled drivers build config 00:02:49.737 crypto/null: not in enabled drivers build config 00:02:49.737 crypto/octeontx: not in enabled drivers build config 00:02:49.737 crypto/openssl: not in enabled drivers build config 00:02:49.737 crypto/scheduler: not in enabled drivers build config 00:02:49.737 crypto/uadk: not in enabled drivers build config 00:02:49.737 crypto/virtio: not in enabled drivers build config 00:02:49.737 compress/isal: not in enabled drivers build config 00:02:49.737 compress/mlx5: not in enabled drivers build config 00:02:49.737 compress/octeontx: not in enabled drivers build config 00:02:49.737 compress/zlib: not in enabled drivers build config 00:02:49.737 regex/mlx5: not in enabled drivers build config 00:02:49.737 regex/cn9k: not in enabled drivers build config 00:02:49.737 ml/cnxk: not in enabled drivers build config 00:02:49.737 vdpa/ifc: not in enabled drivers build config 00:02:49.737 vdpa/mlx5: not in enabled drivers build config 00:02:49.737 vdpa/nfp: not in enabled drivers build config 00:02:49.737 vdpa/sfc: not in enabled drivers build config 00:02:49.737 event/cnxk: not in enabled drivers build config 00:02:49.737 event/dlb2: not in enabled drivers build config 00:02:49.737 event/dpaa: not in enabled drivers build config 00:02:49.737 event/dpaa2: not in enabled drivers build config 00:02:49.737 event/dsw: not in enabled drivers build config 00:02:49.737 event/opdl: not in enabled drivers build config 00:02:49.737 event/skeleton: not in enabled drivers build config 00:02:49.737 event/sw: not in enabled drivers build config 00:02:49.737 event/octeontx: not in enabled drivers build config 00:02:49.737 baseband/acc: not in enabled drivers build config 00:02:49.737 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:49.737 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:49.737 baseband/la12xx: not in enabled drivers build config 00:02:49.737 baseband/null: not in enabled drivers build config 00:02:49.737 baseband/turbo_sw: not in enabled drivers build config 00:02:49.737 gpu/cuda: not in enabled drivers build config 00:02:49.737 00:02:49.737 00:02:49.737 Build targets in project: 215 00:02:49.737 00:02:49.737 DPDK 23.11.0 00:02:49.737 00:02:49.737 User defined options 00:02:49.737 libdir : lib 00:02:49.737 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:49.737 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:49.737 c_link_args : 00:02:49.737 enable_docs : false 00:02:49.737 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:49.737 enable_kmods : false 00:02:49.737 machine : native 00:02:49.737 tests : false 00:02:49.737 00:02:49.737 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:49.737 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:49.737 04:08:35 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:49.737 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:49.998 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:49.998 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:49.998 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:49.999 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:49.999 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:49.999 [6/705] Linking static target lib/librte_kvargs.a 00:02:49.999 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:49.999 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:49.999 [9/705] Linking static target lib/librte_log.a 00:02:49.999 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.260 [11/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.260 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:50.260 [13/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:50.260 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:50.261 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:50.261 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:50.522 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.522 [18/705] Linking target lib/librte_log.so.24.0 00:02:50.522 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:50.522 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:50.522 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:50.522 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:50.522 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:50.522 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:50.784 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:50.784 [26/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:50.784 [27/705] Linking target lib/librte_kvargs.so.24.0 00:02:50.784 [28/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:50.784 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:50.784 [30/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:50.784 [31/705] Linking static target lib/librte_telemetry.a 00:02:50.784 [32/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:50.784 [33/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:50.784 [34/705] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:51.045 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:51.045 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:51.045 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:51.045 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:51.045 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:51.045 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:51.045 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:51.306 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.306 [43/705] Linking target lib/librte_telemetry.so.24.0 00:02:51.306 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:51.306 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:51.306 [46/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:51.306 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:51.306 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:51.565 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:51.565 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:51.565 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:51.565 [52/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:51.565 [53/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:51.565 [54/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:51.565 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:51.565 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:51.823 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:51.823 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:51.823 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:51.823 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:51.823 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:51.823 [62/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:51.823 [63/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:51.823 [64/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:51.823 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:51.823 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:51.823 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:51.823 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:52.080 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:52.080 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:52.080 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:52.080 [72/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:52.080 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 
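The "User defined options" block in the configure summary above records how this DPDK 23.11 tree was set up before the ninja run. For reference, an equivalent configuration step reconstructed from those logged values (the exact command line below is not itself part of this log; paths and the build-tmp directory are taken from the lines above) would look roughly like:

  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Dmachine=native \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
  ninja -C build-tmp -j10

Invoking `meson setup` explicitly also avoids the deprecation warning shown above about running the setup command as plain `meson [options]`.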
00:02:52.080 [74/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.080 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:52.080 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:52.080 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:52.338 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:52.338 [79/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:52.338 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:52.338 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:52.596 [82/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:52.596 [83/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:52.596 [84/705] Linking static target lib/librte_ring.a 00:02:52.596 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:52.596 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:52.596 [87/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:52.596 [88/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.596 [89/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:52.596 [90/705] Linking static target lib/librte_eal.a 00:02:52.854 [91/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:52.854 [92/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:52.855 [93/705] Linking static target lib/librte_mempool.a 00:02:52.855 [94/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:52.855 [95/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:53.113 [96/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:53.113 [97/705] Linking static target lib/librte_rcu.a 00:02:53.113 [98/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:53.113 [99/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:53.113 [100/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:53.113 [101/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:53.113 [102/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:53.113 [103/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.113 [104/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:53.371 [105/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.371 [106/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:53.371 [107/705] Linking static target lib/librte_mbuf.a 00:02:53.371 [108/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:53.371 [109/705] Linking static target lib/librte_meter.a 00:02:53.371 [110/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:53.371 [111/705] Linking static target lib/librte_net.a 00:02:53.371 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:53.371 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:53.628 [114/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.628 [115/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:53.628 [116/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:53.628 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:53.628 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.886 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:53.886 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:54.144 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:54.144 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:54.144 [123/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:54.144 [124/705] Linking static target lib/librte_pci.a 00:02:54.402 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:54.402 [126/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.402 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:54.402 [128/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:54.402 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:54.402 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:54.402 [131/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:54.402 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:54.660 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:54.660 [134/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:54.660 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:54.660 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:54.660 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:54.660 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:54.660 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:54.660 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:54.660 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:54.660 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:54.660 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:54.660 [144/705] Linking static target lib/librte_cmdline.a 00:02:54.918 [145/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:54.918 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:54.918 [147/705] Linking static target lib/librte_metrics.a 00:02:54.918 [148/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:55.177 [149/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:55.177 [150/705] Linking static target lib/librte_timer.a 00:02:55.177 [151/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:55.177 [152/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.435 [153/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:55.435 [154/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.435 [155/705] Generating lib/timer.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:55.693 [156/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:55.693 [157/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:55.693 [158/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:55.951 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:56.210 [160/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:56.210 [161/705] Linking static target lib/librte_bitratestats.a 00:02:56.210 [162/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:56.210 [163/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:56.210 [164/705] Linking static target lib/librte_bbdev.a 00:02:56.210 [165/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.468 [166/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:56.468 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:56.468 [168/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:56.468 [169/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:56.468 [170/705] Linking static target lib/librte_ethdev.a 00:02:56.468 [171/705] Linking static target lib/librte_hash.a 00:02:56.468 [172/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:56.468 [173/705] Linking static target lib/acl/libavx2_tmp.a 00:02:56.727 [174/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.727 [175/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:56.727 [176/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:56.727 [177/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:56.986 [178/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:56.986 [179/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.986 [180/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:57.245 [181/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:57.245 [182/705] Linking static target lib/librte_cfgfile.a 00:02:57.245 [183/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:57.245 [184/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.245 [185/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:57.245 [186/705] Linking target lib/librte_eal.so.24.0 00:02:57.245 [187/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:57.245 [188/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:57.513 [189/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:57.513 [190/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.513 [191/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:57.513 [192/705] Linking target lib/librte_ring.so.24.0 00:02:57.513 [193/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:57.513 [194/705] Linking target lib/librte_pci.so.24.0 00:02:57.513 [195/705] Linking target lib/librte_meter.so.24.0 00:02:57.513 [196/705] Linking target lib/librte_timer.so.24.0 00:02:57.513 [197/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:57.513 [198/705] Generating symbol file 
lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:57.513 [199/705] Linking target lib/librte_rcu.so.24.0 00:02:57.513 [200/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:57.513 [201/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:57.513 [202/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:57.513 [203/705] Linking static target lib/librte_bpf.a 00:02:57.513 [204/705] Linking target lib/librte_mempool.so.24.0 00:02:57.513 [205/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:57.513 [206/705] Linking static target lib/librte_compressdev.a 00:02:57.513 [207/705] Linking target lib/librte_cfgfile.so.24.0 00:02:57.513 [208/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:57.803 [209/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:57.803 [210/705] Linking static target lib/librte_acl.a 00:02:57.803 [211/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:57.803 [212/705] Linking target lib/librte_mbuf.so.24.0 00:02:57.803 [213/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.803 [214/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:57.803 [215/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:57.803 [216/705] Linking target lib/librte_net.so.24.0 00:02:57.803 [217/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:57.803 [218/705] Linking target lib/librte_bbdev.so.24.0 00:02:57.803 [219/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.060 [220/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:58.060 [221/705] Linking target lib/librte_acl.so.24.0 00:02:58.060 [222/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.060 [223/705] Linking target lib/librte_cmdline.so.24.0 00:02:58.060 [224/705] Linking target lib/librte_hash.so.24.0 00:02:58.060 [225/705] Linking target lib/librte_compressdev.so.24.0 00:02:58.060 [226/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:58.060 [227/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:58.060 [228/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:58.060 [229/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:58.060 [230/705] Linking static target lib/librte_distributor.a 00:02:58.318 [231/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:58.318 [232/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.318 [233/705] Linking target lib/librte_distributor.so.24.0 00:02:58.318 [234/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:58.318 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:58.318 [236/705] Linking static target lib/librte_dmadev.a 00:02:58.576 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:58.576 [238/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.576 [239/705] 
Linking target lib/librte_dmadev.so.24.0 00:02:58.576 [240/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:58.834 [241/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:58.834 [242/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:58.834 [243/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:58.834 [244/705] Linking static target lib/librte_efd.a 00:02:58.834 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:59.092 [246/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:59.092 [247/705] Linking static target lib/librte_cryptodev.a 00:02:59.092 [248/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.092 [249/705] Linking target lib/librte_efd.so.24.0 00:02:59.092 [250/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:59.092 [251/705] Linking static target lib/librte_dispatcher.a 00:02:59.351 [252/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:59.351 [253/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:59.351 [254/705] Linking static target lib/librte_gpudev.a 00:02:59.609 [255/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:59.609 [256/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:59.609 [257/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.609 [258/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:59.867 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:59.867 [260/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:59.867 [261/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:59.867 [262/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.867 [263/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:59.867 [264/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:59.867 [265/705] Linking target lib/librte_cryptodev.so.24.0 00:02:59.867 [266/705] Linking static target lib/librte_gro.a 00:02:59.867 [267/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.867 [268/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.125 [269/705] Linking target lib/librte_gpudev.so.24.0 00:03:00.125 [270/705] Linking target lib/librte_ethdev.so.24.0 00:03:00.125 [271/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:00.125 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:00.125 [273/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:00.125 [274/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:00.125 [275/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.125 [276/705] Linking target lib/librte_metrics.so.24.0 00:03:00.125 [277/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:00.125 [278/705] Linking target lib/librte_bpf.so.24.0 00:03:00.125 [279/705] Linking target lib/librte_gro.so.24.0 00:03:00.125 [280/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 
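The AVX-512-specific objects in this build (for example acl_acl_run_avx512.c.o and net_net_crc_avx512.c.o earlier in the log) are only compiled because the configure stage reported results such as "Compiler for C supports arguments -mavx512f: YES". Such a probe roughly amounts to checking whether a trivial translation unit compiles with the flag; reproduced by hand it looks like the following (the temporary file name is illustrative, not taken from the log):

  printf 'int main(void) { return 0; }\n' > /tmp/flag_probe.c
  cc -c -mavx512f -Werror /tmp/flag_probe.c -o /dev/null \
      && echo '-mavx512f supported'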
00:03:00.125 [281/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:00.125 [282/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:00.384 [283/705] Linking target lib/librte_bitratestats.so.24.0 00:03:00.384 [284/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:00.384 [285/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:00.384 [286/705] Linking static target lib/librte_gso.a 00:03:00.384 [287/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:00.384 [288/705] Linking static target lib/librte_eventdev.a 00:03:00.384 [289/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:00.384 [290/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.384 [291/705] Linking target lib/librte_gso.so.24.0 00:03:00.642 [292/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:00.642 [293/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:00.642 [294/705] Linking static target lib/librte_jobstats.a 00:03:00.642 [295/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:00.642 [296/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:00.642 [297/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:00.642 [298/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:00.642 [299/705] Linking static target lib/librte_latencystats.a 00:03:00.643 [300/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:00.643 [301/705] Linking static target lib/librte_ip_frag.a 00:03:00.901 [302/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.901 [303/705] Linking target lib/librte_jobstats.so.24.0 00:03:00.901 [304/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.901 [305/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.901 [306/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:00.901 [307/705] Linking target lib/librte_latencystats.so.24.0 00:03:00.901 [308/705] Linking target lib/librte_ip_frag.so.24.0 00:03:00.901 [309/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:00.901 [310/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:01.160 [311/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:01.160 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:01.160 [313/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:01.160 [314/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:01.160 [315/705] Linking static target lib/librte_lpm.a 00:03:01.160 [316/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:01.418 [317/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:01.418 [318/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:01.418 [319/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:01.418 [320/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.418 [321/705] Linking target 
lib/librte_lpm.so.24.0 00:03:01.418 [322/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:01.418 [323/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:01.676 [324/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:01.676 [325/705] Linking static target lib/librte_pcapng.a 00:03:01.676 [326/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:01.676 [327/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:01.676 [328/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:01.677 [329/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:01.677 [330/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.677 [331/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:01.677 [332/705] Linking target lib/librte_pcapng.so.24.0 00:03:01.935 [333/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.935 [334/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:01.935 [335/705] Linking static target lib/librte_power.a 00:03:01.935 [336/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:01.935 [337/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:01.935 [338/705] Linking target lib/librte_eventdev.so.24.0 00:03:01.935 [339/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:01.935 [340/705] Linking static target lib/librte_rawdev.a 00:03:01.935 [341/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:01.935 [342/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:01.935 [343/705] Linking static target lib/librte_regexdev.a 00:03:01.935 [344/705] Linking static target lib/librte_member.a 00:03:01.935 [345/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:01.935 [346/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:01.935 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:01.935 [348/705] Linking target lib/librte_dispatcher.so.24.0 00:03:02.193 [349/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:02.193 [350/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.193 [351/705] Linking target lib/librte_member.so.24.0 00:03:02.193 [352/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:02.193 [353/705] Linking static target lib/librte_mldev.a 00:03:02.193 [354/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.193 [355/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.451 [356/705] Linking target lib/librte_rawdev.so.24.0 00:03:02.451 [357/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:02.451 [358/705] Linking target lib/librte_power.so.24.0 00:03:02.451 [359/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:02.451 [360/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:02.451 [361/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:02.451 [362/705] Linking static target lib/librte_reorder.a 00:03:02.451 [363/705] Generating 
lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.451 [364/705] Linking target lib/librte_regexdev.so.24.0 00:03:02.451 [365/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:02.709 [366/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:02.709 [367/705] Linking static target lib/librte_rib.a 00:03:02.709 [368/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:02.709 [369/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:02.709 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:02.709 [371/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.709 [372/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:02.709 [373/705] Linking static target lib/librte_stack.a 00:03:02.709 [374/705] Linking target lib/librte_reorder.so.24.0 00:03:02.709 [375/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:02.709 [376/705] Linking static target lib/librte_security.a 00:03:02.968 [377/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:02.968 [378/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.968 [379/705] Linking target lib/librte_rib.so.24.0 00:03:02.968 [380/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.968 [381/705] Linking target lib/librte_stack.so.24.0 00:03:02.968 [382/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:02.968 [383/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:02.968 [384/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:03.226 [385/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.226 [386/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.226 [387/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.226 [388/705] Linking target lib/librte_security.so.24.0 00:03:03.226 [389/705] Linking target lib/librte_mldev.so.24.0 00:03:03.226 [390/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:03.226 [391/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:03.226 [392/705] Linking static target lib/librte_sched.a 00:03:03.485 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:03.485 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:03.485 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.485 [396/705] Linking target lib/librte_sched.so.24.0 00:03:03.485 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:03.485 [398/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:03.744 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:03.744 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:03.744 [401/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:04.002 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:04.002 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:04.002 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:04.002 [405/705] Compiling C 
object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:04.002 [406/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:04.262 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:04.262 [408/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:04.262 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:04.262 [410/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:04.262 [411/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:04.262 [412/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:04.262 [413/705] Linking static target lib/librte_ipsec.a 00:03:04.521 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.521 [415/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:04.521 [416/705] Linking target lib/librte_ipsec.so.24.0 00:03:04.521 [417/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:04.521 [418/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:04.779 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:04.779 [420/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:04.779 [421/705] Linking static target lib/librte_fib.a 00:03:04.779 [422/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:04.779 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:05.037 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:05.037 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:05.037 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:05.037 [427/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.037 [428/705] Linking target lib/librte_fib.so.24.0 00:03:05.296 [429/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:05.296 [430/705] Linking static target lib/librte_pdcp.a 00:03:05.296 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.296 [432/705] Linking target lib/librte_pdcp.so.24.0 00:03:05.296 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:05.554 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:05.554 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:05.554 [436/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:05.554 [437/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:05.554 [438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:05.813 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:05.813 [440/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:05.813 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:05.813 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:05.813 [443/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:05.813 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:06.071 [445/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:06.071 [446/705] Linking static target lib/librte_port.a 00:03:06.071 [447/705] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:06.071 [448/705] Linking static target lib/librte_pdump.a 00:03:06.071 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:06.071 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:06.330 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:06.330 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.330 [453/705] Linking target lib/librte_pdump.so.24.0 00:03:06.330 [454/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.330 [455/705] Linking target lib/librte_port.so.24.0 00:03:06.588 [456/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:06.588 [457/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:06.588 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:06.588 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:06.588 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:06.588 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:06.847 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:06.847 [463/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:06.847 [464/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:06.847 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:06.847 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:06.847 [467/705] Linking static target lib/librte_table.a 00:03:07.105 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:07.105 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:07.363 [470/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:07.363 [471/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.363 [472/705] Linking target lib/librte_table.so.24.0 00:03:07.363 [473/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:07.363 [474/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:07.363 [475/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:07.620 [476/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:07.620 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:07.620 [478/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:07.620 [479/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:07.878 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:07.878 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:07.878 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:08.135 [483/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:08.135 [484/705] Linking static target lib/librte_graph.a 00:03:08.135 [485/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:08.135 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:08.135 [487/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:08.135 
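Because the configuration above sets prefix to /home/vagrant/spdk_repo/dpdk/build with libdir lib, the finished build is normally consumed through the pkg-config file DPDK generates at install time. The install step itself is not part of the output shown here; as an illustration only, it would look roughly like:

  meson install -C /home/vagrant/spdk_repo/dpdk/build-tmp
  PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
      pkg-config --modversion libdpdk    # should report 23.11.0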
[488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:08.401 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:08.401 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.401 [491/705] Linking target lib/librte_graph.so.24.0 00:03:08.401 [492/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:08.669 [493/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:08.669 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:08.669 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:08.669 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:08.669 [497/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:08.669 [498/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:08.926 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:08.926 [500/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:08.926 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:08.926 [502/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:09.200 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:09.200 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:09.200 [505/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:09.200 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:09.200 [507/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:09.200 [508/705] Linking static target lib/librte_node.a 00:03:09.200 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:09.200 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:09.458 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.458 [512/705] Linking target lib/librte_node.so.24.0 00:03:09.458 [513/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:09.458 [514/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:09.716 [515/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:09.716 [516/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:09.716 [517/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:09.716 [518/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.716 [519/705] Linking static target drivers/librte_bus_vdev.a 00:03:09.716 [520/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:09.716 [521/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.716 [522/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:09.716 [523/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:09.716 [524/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.716 [525/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:09.716 [526/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.716 [527/705] Linking static target drivers/librte_bus_pci.a 00:03:09.716 [528/705] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.974 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:09.974 [530/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:09.974 [531/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:09.974 [532/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:09.974 [533/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:09.974 [534/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:10.231 [535/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.231 [536/705] Linking static target drivers/librte_mempool_ring.a 00:03:10.231 [537/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.231 [538/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:10.231 [539/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.231 [540/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:10.231 [541/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:10.489 [542/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:10.489 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:10.746 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:10.746 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:11.312 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:11.312 [547/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:11.312 [548/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:11.312 [549/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:11.312 [550/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:11.312 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:11.312 [552/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:11.570 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:11.830 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:11.830 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:11.830 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:11.830 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:12.089 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:12.089 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:12.089 [560/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:12.348 [561/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:12.348 [562/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:12.348 [563/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:12.606 [564/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:12.606 [565/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:12.606 [566/705] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:12.606 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:12.606 [568/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:12.606 [569/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:12.606 [570/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:12.865 [571/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:12.865 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:12.865 [573/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:13.123 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:13.123 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:13.123 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:13.123 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:13.382 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:13.382 [579/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:13.382 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:13.382 [581/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:13.382 [582/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:13.382 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:13.640 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:13.640 [585/705] Linking static target drivers/librte_net_i40e.a 00:03:13.640 [586/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:13.640 [587/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:13.640 [588/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:13.899 [589/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.899 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:13.899 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:13.899 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:13.899 [593/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:14.157 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:14.157 [595/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:14.157 [596/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:14.158 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:14.416 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:14.416 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:14.416 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:14.675 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:14.675 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 
00:03:14.675 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:14.675 [604/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:14.675 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:14.675 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:14.933 [607/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:14.933 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:14.933 [609/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:14.933 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:14.933 [611/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:15.191 [612/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:15.191 [613/705] Linking static target lib/librte_vhost.a 00:03:15.191 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:15.191 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:15.191 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:15.756 [617/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:15.756 [618/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.756 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:15.756 [620/705] Linking target lib/librte_vhost.so.24.0 00:03:15.756 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:16.063 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:16.063 [623/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:16.063 [624/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:16.063 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:16.063 [626/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:16.063 [627/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:16.322 [628/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:16.322 [629/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:16.322 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:16.322 [631/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:16.322 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:16.580 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:16.580 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:16.580 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:16.580 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:16.580 [637/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:16.580 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:16.838 [639/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:16.838 [640/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:16.838 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:16.838 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:16.838 [643/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:17.097 [644/705] Linking static target lib/librte_pipeline.a 00:03:17.097 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:17.097 [646/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:17.097 [647/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:17.097 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:17.097 [649/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:17.097 [650/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:17.097 [651/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:17.356 [652/705] Linking target app/dpdk-dumpcap 00:03:17.356 [653/705] Linking target app/dpdk-graph 00:03:17.356 [654/705] Linking target app/dpdk-pdump 00:03:17.356 [655/705] Linking target app/dpdk-proc-info 00:03:17.356 [656/705] Linking target app/dpdk-test-acl 00:03:17.356 [657/705] Linking target app/dpdk-test-compress-perf 00:03:17.614 [658/705] Linking target app/dpdk-test-crypto-perf 00:03:17.614 [659/705] Linking target app/dpdk-test-cmdline 00:03:17.614 [660/705] Linking target app/dpdk-test-dma-perf 00:03:17.614 [661/705] Linking target app/dpdk-test-fib 00:03:17.614 [662/705] Linking target app/dpdk-test-eventdev 00:03:17.614 [663/705] Linking target app/dpdk-test-flow-perf 00:03:17.614 [664/705] Linking target app/dpdk-test-gpudev 00:03:17.872 [665/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:17.872 [666/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:18.131 [667/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:18.131 [668/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:18.131 [669/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:18.389 [670/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:18.389 [671/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:18.389 [672/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:18.389 [673/705] Linking target app/dpdk-test-mldev 00:03:18.647 [674/705] Linking target app/dpdk-test-pipeline 00:03:18.647 [675/705] Linking target app/dpdk-test-bbdev 00:03:18.647 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:18.906 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:18.906 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:18.906 [679/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.906 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:18.906 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:18.906 [682/705] Linking target lib/librte_pipeline.so.24.0 00:03:18.906 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:19.164 [684/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:19.164 [685/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:19.164 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:19.164 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:19.423 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:19.423 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:19.423 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:19.681 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:19.681 [692/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:19.940 [693/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:19.940 [694/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:19.940 [695/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:19.940 [696/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:20.198 [697/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:20.198 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:20.198 [699/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:20.198 [700/705] Linking target app/dpdk-test-sad 00:03:20.456 [701/705] Linking target app/dpdk-test-regex 00:03:20.456 [702/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:20.456 [703/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:20.714 [704/705] Linking target app/dpdk-test-security-perf 00:03:20.714 [705/705] Linking target app/dpdk-testpmd 00:03:20.714 04:09:06 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:20.714 04:09:06 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:20.714 04:09:06 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:20.972 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:20.972 [0/1] Installing files. 
00:03:21.234 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.234 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.235 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:21.235 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.235 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:21.236 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.237 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.238 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:21.238 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:21.238 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.238 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:21.239 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:21.239 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.239 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.499 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.499 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.499 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.499 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.499 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.499 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.499 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.499 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.499 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.499 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.499 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.499 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.500 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.500 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.500 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.500 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.500 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.500 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.500 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.500 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.500 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.501 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.502 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:21.503 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:21.503 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:21.503 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:21.503 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:21.503 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:21.503 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:21.503 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:21.503 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:21.503 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:21.503 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:21.503 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:21.503 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:21.503 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:21.503 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:21.503 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:21.503 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:21.503 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:21.503 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:21.503 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:21.503 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:21.503 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:21.503 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:21.503 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:21.503 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:21.503 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:21.503 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:21.503 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:21.503 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:21.503 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:21.503 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:21.503 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:21.503 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:21.503 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:21.503 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:21.503 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:21.503 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:21.503 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:21.503 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:21.503 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:21.503 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:21.503 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:21.503 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:21.503 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:21.503 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:21.503 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:21.503 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:21.503 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:21.503 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:21.503 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:21.503 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:21.503 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:21.503 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:21.503 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:21.503 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:21.503 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:21.503 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:21.503 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:21.503 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:21.503 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:21.503 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:21.503 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:21.503 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:21.503 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:21.503 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:21.503 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:21.503 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:21.503 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:21.503 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:21.503 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:21.503 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:21.503 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:21.503 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:21.503 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:21.503 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:21.503 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:21.503 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:21.503 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:21.503 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:21.503 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:21.503 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:21.503 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:21.503 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:21.503 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:21.503 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:21.503 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:21.503 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:21.503 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:21.503 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:21.503 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:21.504 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:21.504 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:21.504 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:21.504 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:21.504 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:21.504 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:21.504 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:21.504 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:21.504 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:21.504 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:21.504 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:21.504 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:21.504 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:21.504 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:21.504 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:21.504 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:21.504 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:21.504 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:21.504 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:21.504 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:21.504 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:21.504 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:21.504 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:21.504 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:21.504 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:21.504 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:21.504 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:21.504 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:21.504 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:21.504 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:21.504 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:21.504 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:21.504 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:21.504 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:21.504 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:21.504 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:21.504 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:21.504 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:21.504 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:21.504 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:21.504 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:21.504 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:21.504 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:21.504 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:21.504 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:21.504 04:09:07 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:21.504 04:09:07 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:21.504 00:03:21.504 real 0m37.575s 00:03:21.504 user 4m22.802s 00:03:21.504 sys 0m38.622s 00:03:21.504 04:09:07 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:21.504 ************************************ 00:03:21.504 END TEST build_native_dpdk 00:03:21.504 ************************************ 00:03:21.504 04:09:07 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:21.504 04:09:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:21.504 04:09:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:21.504 04:09:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:21.504 04:09:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:21.504 04:09:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:21.504 04:09:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:21.504 04:09:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:21.504 04:09:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:21.762 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:21.762 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.762 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:21.762 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:22.019 Using 'verbs' RDMA provider 00:03:33.414 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:43.415 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:43.672 Creating mk/config.mk...done. 00:03:43.672 Creating mk/cc.flags.mk...done. 00:03:43.672 Type 'make' to build. 00:03:43.672 04:09:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:43.672 04:09:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:43.672 04:09:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:43.672 04:09:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:43.672 ************************************ 00:03:43.672 START TEST make 00:03:43.672 ************************************ 00:03:43.672 04:09:29 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:43.929 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:43.929 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:43.929 meson setup builddir \ 00:03:43.929 -Dwith-libaio=enabled \ 00:03:43.929 -Dwith-liburing=enabled \ 00:03:43.929 -Dwith-libvfn=disabled \ 00:03:43.929 -Dwith-spdk=disabled \ 00:03:43.929 -Dexamples=false \ 00:03:43.929 -Dtests=false \ 00:03:43.929 -Dtools=false && \ 00:03:43.929 meson compile -C builddir && \ 00:03:43.929 cd -) 00:03:43.929 make[1]: Nothing to be done for 'all'. 
00:03:45.833 The Meson build system 00:03:45.833 Version: 1.5.0 00:03:45.833 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:45.833 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:45.833 Build type: native build 00:03:45.833 Project name: xnvme 00:03:45.833 Project version: 0.7.5 00:03:45.833 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:45.833 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:45.833 Host machine cpu family: x86_64 00:03:45.833 Host machine cpu: x86_64 00:03:45.833 Message: host_machine.system: linux 00:03:45.833 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:45.833 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:45.833 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:45.833 Run-time dependency threads found: YES 00:03:45.833 Has header "setupapi.h" : NO 00:03:45.833 Has header "linux/blkzoned.h" : YES 00:03:45.833 Has header "linux/blkzoned.h" : YES (cached) 00:03:45.833 Has header "libaio.h" : YES 00:03:45.833 Library aio found: YES 00:03:45.833 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:45.833 Run-time dependency liburing found: YES 2.2 00:03:45.833 Dependency libvfn skipped: feature with-libvfn disabled 00:03:45.833 Found CMake: /usr/bin/cmake (3.27.7) 00:03:45.833 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:45.833 Subproject spdk : skipped: feature with-spdk disabled 00:03:45.833 Run-time dependency appleframeworks found: NO (tried framework) 00:03:45.833 Run-time dependency appleframeworks found: NO (tried framework) 00:03:45.833 Library rt found: YES 00:03:45.833 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:45.833 Configuring xnvme_config.h using configuration 00:03:45.833 Configuring xnvme.spec using configuration 00:03:45.833 Run-time dependency bash-completion found: YES 2.11 00:03:45.833 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:45.833 Program cp found: YES (/usr/bin/cp) 00:03:45.833 Build targets in project: 3 00:03:45.833 00:03:45.833 xnvme 0.7.5 00:03:45.833 00:03:45.833 Subprojects 00:03:45.833 spdk : NO Feature 'with-spdk' disabled 00:03:45.833 00:03:45.833 User defined options 00:03:45.833 examples : false 00:03:45.833 tests : false 00:03:45.833 tools : false 00:03:45.833 with-libaio : enabled 00:03:45.833 with-liburing: enabled 00:03:45.833 with-libvfn : disabled 00:03:45.833 with-spdk : disabled 00:03:45.833 00:03:45.833 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:46.095 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:46.095 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:46.095 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:46.095 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:46.095 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:46.095 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:46.095 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:46.095 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:46.095 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:46.095 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:46.095 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:46.095 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:46.095 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:46.095 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:46.357 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:46.357 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:46.357 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:46.357 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:46.357 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:46.357 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:46.357 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:46.357 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:46.357 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:46.357 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:46.357 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:46.357 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:46.357 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:46.357 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:46.357 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:46.357 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:46.357 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:46.357 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:46.357 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:46.357 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:46.357 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:46.357 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:46.357 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:46.357 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:46.357 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:46.357 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:46.357 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:46.357 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:46.357 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:46.357 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:46.357 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:46.357 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:46.357 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:46.357 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:46.357 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:46.357 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:46.357 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:46.357 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:46.357 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:46.619 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:46.619 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:46.619 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:46.619 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:46.619 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:46.619 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:46.619 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:46.619 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:46.619 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:46.619 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:46.619 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:46.619 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:46.619 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:46.619 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:46.619 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:46.619 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:46.619 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:46.619 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:46.619 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:46.879 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:46.879 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:47.138 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:47.138 [75/76] Linking static target lib/libxnvme.a 00:03:47.138 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:47.138 INFO: autodetecting backend as ninja 00:03:47.138 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:47.138 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:19.228 CC lib/log/log.o 00:04:19.228 CC lib/log/log_flags.o 00:04:19.228 CC lib/log/log_deprecated.o 00:04:19.228 CC lib/ut_mock/mock.o 00:04:19.228 CC lib/ut/ut.o 00:04:19.228 LIB libspdk_ut_mock.a 00:04:19.228 LIB libspdk_log.a 00:04:19.228 LIB libspdk_ut.a 00:04:19.228 SO libspdk_ut_mock.so.6.0 00:04:19.228 SO libspdk_log.so.7.1 00:04:19.228 SO libspdk_ut.so.2.0 00:04:19.228 SYMLINK libspdk_ut_mock.so 00:04:19.228 SYMLINK libspdk_log.so 00:04:19.228 SYMLINK libspdk_ut.so 00:04:19.228 CC lib/util/base64.o 00:04:19.228 CC lib/util/cpuset.o 00:04:19.228 CC lib/util/crc16.o 00:04:19.228 CC lib/util/bit_array.o 00:04:19.228 CC lib/util/crc32.o 00:04:19.228 CC lib/util/crc32c.o 00:04:19.228 CC lib/dma/dma.o 00:04:19.228 CC lib/ioat/ioat.o 00:04:19.228 CXX lib/trace_parser/trace.o 00:04:19.228 CC lib/vfio_user/host/vfio_user_pci.o 00:04:19.228 CC lib/util/crc32_ieee.o 00:04:19.228 CC lib/util/crc64.o 00:04:19.228 CC lib/util/dif.o 00:04:19.228 CC lib/util/fd.o 00:04:19.228 CC lib/util/fd_group.o 00:04:19.228 CC lib/util/file.o 00:04:19.228 LIB libspdk_dma.a 00:04:19.228 CC lib/util/hexlify.o 00:04:19.228 SO libspdk_dma.so.5.0 00:04:19.228 CC lib/util/iov.o 00:04:19.228 CC lib/util/math.o 00:04:19.228 SYMLINK libspdk_dma.so 00:04:19.228 CC lib/util/net.o 00:04:19.228 LIB 
libspdk_ioat.a 00:04:19.228 CC lib/util/pipe.o 00:04:19.228 SO libspdk_ioat.so.7.0 00:04:19.228 CC lib/vfio_user/host/vfio_user.o 00:04:19.228 SYMLINK libspdk_ioat.so 00:04:19.228 CC lib/util/strerror_tls.o 00:04:19.228 CC lib/util/string.o 00:04:19.228 CC lib/util/uuid.o 00:04:19.228 CC lib/util/xor.o 00:04:19.228 CC lib/util/zipf.o 00:04:19.228 CC lib/util/md5.o 00:04:19.228 LIB libspdk_vfio_user.a 00:04:19.228 SO libspdk_vfio_user.so.5.0 00:04:19.228 SYMLINK libspdk_vfio_user.so 00:04:19.228 LIB libspdk_util.a 00:04:19.228 LIB libspdk_trace_parser.a 00:04:19.228 SO libspdk_trace_parser.so.6.0 00:04:19.228 SO libspdk_util.so.10.1 00:04:19.228 SYMLINK libspdk_trace_parser.so 00:04:19.228 SYMLINK libspdk_util.so 00:04:19.228 CC lib/vmd/vmd.o 00:04:19.228 CC lib/vmd/led.o 00:04:19.228 CC lib/json/json_parse.o 00:04:19.228 CC lib/json/json_util.o 00:04:19.228 CC lib/json/json_write.o 00:04:19.228 CC lib/rdma_utils/rdma_utils.o 00:04:19.228 CC lib/idxd/idxd.o 00:04:19.228 CC lib/env_dpdk/env.o 00:04:19.228 CC lib/idxd/idxd_user.o 00:04:19.228 CC lib/conf/conf.o 00:04:19.228 CC lib/env_dpdk/memory.o 00:04:19.228 CC lib/env_dpdk/pci.o 00:04:19.228 LIB libspdk_conf.a 00:04:19.228 CC lib/env_dpdk/init.o 00:04:19.228 LIB libspdk_json.a 00:04:19.228 CC lib/idxd/idxd_kernel.o 00:04:19.228 SO libspdk_conf.so.6.0 00:04:19.228 SO libspdk_json.so.6.0 00:04:19.228 LIB libspdk_rdma_utils.a 00:04:19.228 SYMLINK libspdk_conf.so 00:04:19.228 CC lib/env_dpdk/threads.o 00:04:19.228 SO libspdk_rdma_utils.so.1.0 00:04:19.228 SYMLINK libspdk_json.so 00:04:19.228 CC lib/env_dpdk/pci_ioat.o 00:04:19.229 SYMLINK libspdk_rdma_utils.so 00:04:19.229 CC lib/env_dpdk/pci_virtio.o 00:04:19.229 CC lib/env_dpdk/pci_vmd.o 00:04:19.229 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.229 CC lib/rdma_provider/common.o 00:04:19.229 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:19.229 CC lib/env_dpdk/pci_idxd.o 00:04:19.229 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.229 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.229 CC lib/env_dpdk/pci_event.o 00:04:19.229 CC lib/env_dpdk/sigbus_handler.o 00:04:19.229 LIB libspdk_idxd.a 00:04:19.229 LIB libspdk_vmd.a 00:04:19.229 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:19.229 CC lib/env_dpdk/pci_dpdk.o 00:04:19.229 SO libspdk_idxd.so.12.1 00:04:19.229 SO libspdk_vmd.so.6.0 00:04:19.229 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.229 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.229 SYMLINK libspdk_idxd.so 00:04:19.229 SYMLINK libspdk_vmd.so 00:04:19.229 LIB libspdk_rdma_provider.a 00:04:19.229 SO libspdk_rdma_provider.so.7.0 00:04:19.229 SYMLINK libspdk_rdma_provider.so 00:04:19.229 LIB libspdk_jsonrpc.a 00:04:19.229 SO libspdk_jsonrpc.so.6.0 00:04:19.229 SYMLINK libspdk_jsonrpc.so 00:04:19.229 CC lib/rpc/rpc.o 00:04:19.488 LIB libspdk_rpc.a 00:04:19.488 LIB libspdk_env_dpdk.a 00:04:19.488 SO libspdk_rpc.so.6.0 00:04:19.746 SYMLINK libspdk_rpc.so 00:04:19.746 SO libspdk_env_dpdk.so.15.1 00:04:19.746 SYMLINK libspdk_env_dpdk.so 00:04:19.746 CC lib/keyring/keyring.o 00:04:19.746 CC lib/notify/notify_rpc.o 00:04:19.746 CC lib/keyring/keyring_rpc.o 00:04:19.746 CC lib/notify/notify.o 00:04:19.746 CC lib/trace/trace.o 00:04:19.746 CC lib/trace/trace_rpc.o 00:04:19.746 CC lib/trace/trace_flags.o 00:04:20.005 LIB libspdk_notify.a 00:04:20.005 SO libspdk_notify.so.6.0 00:04:20.005 SYMLINK libspdk_notify.so 00:04:20.005 LIB libspdk_keyring.a 00:04:20.005 LIB libspdk_trace.a 00:04:20.005 SO libspdk_keyring.so.2.0 00:04:20.005 SO libspdk_trace.so.11.0 00:04:20.005 SYMLINK libspdk_keyring.so 00:04:20.005 SYMLINK 
libspdk_trace.so 00:04:20.264 CC lib/sock/sock.o 00:04:20.264 CC lib/sock/sock_rpc.o 00:04:20.264 CC lib/thread/iobuf.o 00:04:20.264 CC lib/thread/thread.o 00:04:20.831 LIB libspdk_sock.a 00:04:20.831 SO libspdk_sock.so.10.0 00:04:20.831 SYMLINK libspdk_sock.so 00:04:21.088 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:21.088 CC lib/nvme/nvme_ctrlr.o 00:04:21.088 CC lib/nvme/nvme_fabric.o 00:04:21.088 CC lib/nvme/nvme_ns_cmd.o 00:04:21.088 CC lib/nvme/nvme_qpair.o 00:04:21.088 CC lib/nvme/nvme_pcie.o 00:04:21.088 CC lib/nvme/nvme_ns.o 00:04:21.088 CC lib/nvme/nvme_pcie_common.o 00:04:21.088 CC lib/nvme/nvme.o 00:04:21.653 CC lib/nvme/nvme_quirks.o 00:04:21.653 CC lib/nvme/nvme_transport.o 00:04:21.653 CC lib/nvme/nvme_discovery.o 00:04:21.653 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:21.910 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:21.910 CC lib/nvme/nvme_tcp.o 00:04:21.910 LIB libspdk_thread.a 00:04:21.910 CC lib/nvme/nvme_opal.o 00:04:21.910 SO libspdk_thread.so.11.0 00:04:21.910 CC lib/nvme/nvme_io_msg.o 00:04:21.910 SYMLINK libspdk_thread.so 00:04:21.910 CC lib/nvme/nvme_poll_group.o 00:04:22.168 CC lib/nvme/nvme_zns.o 00:04:22.168 CC lib/nvme/nvme_stubs.o 00:04:22.168 CC lib/nvme/nvme_auth.o 00:04:22.168 CC lib/accel/accel.o 00:04:22.426 CC lib/nvme/nvme_cuse.o 00:04:22.426 CC lib/nvme/nvme_rdma.o 00:04:22.426 CC lib/accel/accel_rpc.o 00:04:22.684 CC lib/blob/blobstore.o 00:04:22.684 CC lib/accel/accel_sw.o 00:04:22.684 CC lib/init/json_config.o 00:04:22.684 CC lib/virtio/virtio.o 00:04:22.943 CC lib/init/subsystem.o 00:04:22.943 CC lib/blob/request.o 00:04:22.943 CC lib/virtio/virtio_vhost_user.o 00:04:22.943 CC lib/blob/zeroes.o 00:04:23.201 CC lib/init/subsystem_rpc.o 00:04:23.201 CC lib/blob/blob_bs_dev.o 00:04:23.201 CC lib/virtio/virtio_vfio_user.o 00:04:23.201 CC lib/init/rpc.o 00:04:23.201 CC lib/virtio/virtio_pci.o 00:04:23.201 CC lib/fsdev/fsdev.o 00:04:23.201 CC lib/fsdev/fsdev_io.o 00:04:23.201 LIB libspdk_accel.a 00:04:23.460 CC lib/fsdev/fsdev_rpc.o 00:04:23.460 SO libspdk_accel.so.16.0 00:04:23.460 LIB libspdk_init.a 00:04:23.460 SO libspdk_init.so.6.0 00:04:23.460 SYMLINK libspdk_accel.so 00:04:23.460 SYMLINK libspdk_init.so 00:04:23.460 LIB libspdk_virtio.a 00:04:23.460 SO libspdk_virtio.so.7.0 00:04:23.460 CC lib/bdev/bdev.o 00:04:23.460 CC lib/bdev/scsi_nvme.o 00:04:23.460 CC lib/bdev/part.o 00:04:23.460 CC lib/bdev/bdev_zone.o 00:04:23.717 CC lib/bdev/bdev_rpc.o 00:04:23.717 SYMLINK libspdk_virtio.so 00:04:23.717 CC lib/event/app.o 00:04:23.717 CC lib/event/reactor.o 00:04:23.717 LIB libspdk_nvme.a 00:04:23.717 CC lib/event/log_rpc.o 00:04:23.717 CC lib/event/app_rpc.o 00:04:23.975 CC lib/event/scheduler_static.o 00:04:23.975 SO libspdk_nvme.so.15.0 00:04:23.975 LIB libspdk_fsdev.a 00:04:23.975 SO libspdk_fsdev.so.2.0 00:04:23.975 SYMLINK libspdk_fsdev.so 00:04:23.975 LIB libspdk_event.a 00:04:24.234 SO libspdk_event.so.14.0 00:04:24.234 SYMLINK libspdk_nvme.so 00:04:24.234 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:24.234 SYMLINK libspdk_event.so 00:04:24.800 LIB libspdk_fuse_dispatcher.a 00:04:24.800 SO libspdk_fuse_dispatcher.so.1.0 00:04:24.800 SYMLINK libspdk_fuse_dispatcher.so 00:04:25.734 LIB libspdk_blob.a 00:04:25.734 SO libspdk_blob.so.11.0 00:04:25.992 SYMLINK libspdk_blob.so 00:04:25.992 CC lib/blobfs/blobfs.o 00:04:25.992 CC lib/blobfs/tree.o 00:04:25.992 CC lib/lvol/lvol.o 00:04:26.583 LIB libspdk_bdev.a 00:04:26.583 SO libspdk_bdev.so.17.0 00:04:26.583 SYMLINK libspdk_bdev.so 00:04:26.583 CC lib/ftl/ftl_core.o 00:04:26.583 CC lib/ftl/ftl_layout.o 00:04:26.583 CC 
lib/ftl/ftl_debug.o 00:04:26.583 CC lib/ftl/ftl_init.o 00:04:26.583 CC lib/nvmf/ctrlr.o 00:04:26.583 CC lib/scsi/dev.o 00:04:26.583 CC lib/ublk/ublk.o 00:04:26.841 CC lib/nbd/nbd.o 00:04:26.841 CC lib/nbd/nbd_rpc.o 00:04:26.841 CC lib/scsi/lun.o 00:04:26.841 LIB libspdk_blobfs.a 00:04:26.841 CC lib/ublk/ublk_rpc.o 00:04:26.841 SO libspdk_blobfs.so.10.0 00:04:27.100 CC lib/scsi/port.o 00:04:27.100 SYMLINK libspdk_blobfs.so 00:04:27.100 CC lib/ftl/ftl_io.o 00:04:27.100 CC lib/scsi/scsi.o 00:04:27.100 CC lib/ftl/ftl_sb.o 00:04:27.100 LIB libspdk_lvol.a 00:04:27.100 SO libspdk_lvol.so.10.0 00:04:27.100 CC lib/ftl/ftl_l2p.o 00:04:27.100 LIB libspdk_nbd.a 00:04:27.100 SYMLINK libspdk_lvol.so 00:04:27.100 CC lib/nvmf/ctrlr_discovery.o 00:04:27.100 CC lib/nvmf/ctrlr_bdev.o 00:04:27.100 CC lib/nvmf/subsystem.o 00:04:27.100 SO libspdk_nbd.so.7.0 00:04:27.100 CC lib/scsi/scsi_bdev.o 00:04:27.100 CC lib/ftl/ftl_l2p_flat.o 00:04:27.100 SYMLINK libspdk_nbd.so 00:04:27.100 CC lib/nvmf/nvmf.o 00:04:27.358 CC lib/ftl/ftl_nv_cache.o 00:04:27.358 CC lib/ftl/ftl_band.o 00:04:27.358 CC lib/ftl/ftl_band_ops.o 00:04:27.358 LIB libspdk_ublk.a 00:04:27.358 SO libspdk_ublk.so.3.0 00:04:27.358 SYMLINK libspdk_ublk.so 00:04:27.358 CC lib/ftl/ftl_writer.o 00:04:27.614 CC lib/ftl/ftl_rq.o 00:04:27.614 CC lib/ftl/ftl_reloc.o 00:04:27.614 CC lib/ftl/ftl_l2p_cache.o 00:04:27.614 CC lib/ftl/ftl_p2l.o 00:04:27.614 CC lib/scsi/scsi_pr.o 00:04:27.872 CC lib/ftl/ftl_p2l_log.o 00:04:27.872 CC lib/nvmf/nvmf_rpc.o 00:04:27.872 CC lib/ftl/mngt/ftl_mngt.o 00:04:28.130 CC lib/scsi/scsi_rpc.o 00:04:28.130 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:28.130 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:28.130 CC lib/scsi/task.o 00:04:28.130 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:28.130 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:28.130 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:28.130 CC lib/nvmf/transport.o 00:04:28.130 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:28.388 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:28.388 LIB libspdk_scsi.a 00:04:28.388 CC lib/nvmf/tcp.o 00:04:28.388 SO libspdk_scsi.so.9.0 00:04:28.388 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:28.388 SYMLINK libspdk_scsi.so 00:04:28.388 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:28.388 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:28.388 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:28.388 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:28.388 CC lib/ftl/utils/ftl_conf.o 00:04:28.646 CC lib/ftl/utils/ftl_md.o 00:04:28.646 CC lib/nvmf/stubs.o 00:04:28.646 CC lib/nvmf/mdns_server.o 00:04:28.646 CC lib/iscsi/conn.o 00:04:28.646 CC lib/ftl/utils/ftl_mempool.o 00:04:28.646 CC lib/nvmf/rdma.o 00:04:28.646 CC lib/vhost/vhost.o 00:04:28.646 CC lib/vhost/vhost_rpc.o 00:04:28.904 CC lib/vhost/vhost_scsi.o 00:04:28.904 CC lib/iscsi/init_grp.o 00:04:28.904 CC lib/nvmf/auth.o 00:04:29.161 CC lib/ftl/utils/ftl_bitmap.o 00:04:29.161 CC lib/vhost/vhost_blk.o 00:04:29.161 CC lib/iscsi/iscsi.o 00:04:29.161 CC lib/ftl/utils/ftl_property.o 00:04:29.161 CC lib/iscsi/param.o 00:04:29.161 CC lib/vhost/rte_vhost_user.o 00:04:29.419 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:29.419 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:29.419 CC lib/iscsi/portal_grp.o 00:04:29.419 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:29.419 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:29.676 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:29.676 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:29.676 CC lib/iscsi/tgt_node.o 00:04:29.676 CC lib/iscsi/iscsi_subsystem.o 00:04:29.676 CC lib/iscsi/iscsi_rpc.o 00:04:29.676 CC lib/iscsi/task.o 00:04:29.676 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:29.676 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:29.933 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:29.933 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:29.933 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:29.933 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:29.933 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:29.933 CC lib/ftl/base/ftl_base_dev.o 00:04:29.933 CC lib/ftl/base/ftl_base_bdev.o 00:04:29.933 CC lib/ftl/ftl_trace.o 00:04:29.933 LIB libspdk_vhost.a 00:04:30.190 SO libspdk_vhost.so.8.0 00:04:30.190 SYMLINK libspdk_vhost.so 00:04:30.190 LIB libspdk_ftl.a 00:04:30.448 SO libspdk_ftl.so.9.0 00:04:30.448 LIB libspdk_iscsi.a 00:04:30.706 SO libspdk_iscsi.so.8.0 00:04:30.706 SYMLINK libspdk_ftl.so 00:04:30.706 LIB libspdk_nvmf.a 00:04:30.706 SYMLINK libspdk_iscsi.so 00:04:30.706 SO libspdk_nvmf.so.20.0 00:04:30.963 SYMLINK libspdk_nvmf.so 00:04:31.220 CC module/env_dpdk/env_dpdk_rpc.o 00:04:31.220 CC module/accel/ioat/accel_ioat.o 00:04:31.220 CC module/fsdev/aio/fsdev_aio.o 00:04:31.220 CC module/accel/error/accel_error.o 00:04:31.220 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:31.220 CC module/accel/dsa/accel_dsa.o 00:04:31.220 CC module/accel/iaa/accel_iaa.o 00:04:31.220 CC module/blob/bdev/blob_bdev.o 00:04:31.220 CC module/sock/posix/posix.o 00:04:31.220 CC module/keyring/file/keyring.o 00:04:31.220 LIB libspdk_env_dpdk_rpc.a 00:04:31.220 SO libspdk_env_dpdk_rpc.so.6.0 00:04:31.220 SYMLINK libspdk_env_dpdk_rpc.so 00:04:31.220 CC module/accel/ioat/accel_ioat_rpc.o 00:04:31.220 CC module/accel/error/accel_error_rpc.o 00:04:31.479 CC module/keyring/file/keyring_rpc.o 00:04:31.479 LIB libspdk_scheduler_dynamic.a 00:04:31.479 CC module/accel/iaa/accel_iaa_rpc.o 00:04:31.479 SO libspdk_scheduler_dynamic.so.4.0 00:04:31.479 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:31.479 LIB libspdk_blob_bdev.a 00:04:31.479 SYMLINK libspdk_scheduler_dynamic.so 00:04:31.479 CC module/accel/dsa/accel_dsa_rpc.o 00:04:31.479 SO libspdk_blob_bdev.so.11.0 00:04:31.479 LIB libspdk_accel_ioat.a 00:04:31.479 LIB libspdk_keyring_file.a 00:04:31.479 LIB libspdk_accel_iaa.a 00:04:31.479 LIB libspdk_accel_error.a 00:04:31.479 SO libspdk_accel_ioat.so.6.0 00:04:31.479 SO libspdk_keyring_file.so.2.0 00:04:31.479 SYMLINK libspdk_blob_bdev.so 00:04:31.479 SO libspdk_accel_iaa.so.3.0 00:04:31.479 CC module/fsdev/aio/linux_aio_mgr.o 00:04:31.479 SO libspdk_accel_error.so.2.0 00:04:31.479 LIB libspdk_accel_dsa.a 00:04:31.479 SYMLINK libspdk_accel_ioat.so 00:04:31.479 SYMLINK libspdk_accel_iaa.so 00:04:31.479 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:31.479 SO libspdk_accel_dsa.so.5.0 00:04:31.479 SYMLINK libspdk_keyring_file.so 00:04:31.479 SYMLINK libspdk_accel_error.so 00:04:31.479 SYMLINK libspdk_accel_dsa.so 00:04:31.736 LIB libspdk_scheduler_dpdk_governor.a 00:04:31.736 CC module/keyring/linux/keyring.o 00:04:31.736 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:31.736 CC module/scheduler/gscheduler/gscheduler.o 00:04:31.736 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:31.736 CC module/keyring/linux/keyring_rpc.o 00:04:31.736 CC module/blobfs/bdev/blobfs_bdev.o 00:04:31.736 CC module/bdev/error/vbdev_error.o 00:04:31.736 CC module/bdev/delay/vbdev_delay.o 00:04:31.736 CC module/bdev/lvol/vbdev_lvol.o 00:04:31.736 CC module/bdev/gpt/gpt.o 00:04:31.736 LIB libspdk_fsdev_aio.a 00:04:31.736 CC module/bdev/gpt/vbdev_gpt.o 00:04:31.736 SO libspdk_fsdev_aio.so.1.0 00:04:31.736 LIB libspdk_keyring_linux.a 00:04:31.736 LIB libspdk_scheduler_gscheduler.a 00:04:31.736 SO 
libspdk_keyring_linux.so.1.0 00:04:31.736 SYMLINK libspdk_fsdev_aio.so 00:04:31.736 SO libspdk_scheduler_gscheduler.so.4.0 00:04:31.736 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:31.736 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:31.993 SYMLINK libspdk_keyring_linux.so 00:04:31.993 SYMLINK libspdk_scheduler_gscheduler.so 00:04:31.993 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:31.993 CC module/bdev/error/vbdev_error_rpc.o 00:04:31.993 LIB libspdk_sock_posix.a 00:04:31.993 LIB libspdk_blobfs_bdev.a 00:04:31.993 CC module/bdev/malloc/bdev_malloc.o 00:04:31.993 SO libspdk_sock_posix.so.6.0 00:04:31.993 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:31.993 LIB libspdk_bdev_gpt.a 00:04:31.993 SO libspdk_blobfs_bdev.so.6.0 00:04:31.993 CC module/bdev/null/bdev_null.o 00:04:31.993 LIB libspdk_bdev_delay.a 00:04:31.994 SO libspdk_bdev_gpt.so.6.0 00:04:31.994 SO libspdk_bdev_delay.so.6.0 00:04:31.994 LIB libspdk_bdev_error.a 00:04:31.994 SYMLINK libspdk_blobfs_bdev.so 00:04:31.994 SYMLINK libspdk_sock_posix.so 00:04:31.994 SYMLINK libspdk_bdev_gpt.so 00:04:31.994 CC module/bdev/null/bdev_null_rpc.o 00:04:31.994 SO libspdk_bdev_error.so.6.0 00:04:32.252 SYMLINK libspdk_bdev_delay.so 00:04:32.252 SYMLINK libspdk_bdev_error.so 00:04:32.252 LIB libspdk_bdev_lvol.a 00:04:32.252 SO libspdk_bdev_lvol.so.6.0 00:04:32.252 CC module/bdev/nvme/bdev_nvme.o 00:04:32.252 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:32.252 CC module/bdev/passthru/vbdev_passthru.o 00:04:32.252 SYMLINK libspdk_bdev_lvol.so 00:04:32.252 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:32.252 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:32.252 CC module/bdev/raid/bdev_raid.o 00:04:32.252 CC module/bdev/split/vbdev_split.o 00:04:32.252 LIB libspdk_bdev_null.a 00:04:32.252 CC module/bdev/xnvme/bdev_xnvme.o 00:04:32.252 SO libspdk_bdev_null.so.6.0 00:04:32.252 SYMLINK libspdk_bdev_null.so 00:04:32.252 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:32.252 LIB libspdk_bdev_malloc.a 00:04:32.509 SO libspdk_bdev_malloc.so.6.0 00:04:32.509 CC module/bdev/raid/bdev_raid_rpc.o 00:04:32.509 SYMLINK libspdk_bdev_malloc.so 00:04:32.509 CC module/bdev/raid/bdev_raid_sb.o 00:04:32.509 CC module/bdev/raid/raid0.o 00:04:32.509 CC module/bdev/split/vbdev_split_rpc.o 00:04:32.509 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:32.509 LIB libspdk_bdev_xnvme.a 00:04:32.509 LIB libspdk_bdev_zone_block.a 00:04:32.509 SO libspdk_bdev_xnvme.so.3.0 00:04:32.509 SO libspdk_bdev_zone_block.so.6.0 00:04:32.509 CC module/bdev/raid/raid1.o 00:04:32.768 LIB libspdk_bdev_split.a 00:04:32.768 SYMLINK libspdk_bdev_xnvme.so 00:04:32.768 CC module/bdev/raid/concat.o 00:04:32.768 CC module/bdev/nvme/nvme_rpc.o 00:04:32.768 SO libspdk_bdev_split.so.6.0 00:04:32.768 SYMLINK libspdk_bdev_zone_block.so 00:04:32.768 CC module/bdev/nvme/bdev_mdns_client.o 00:04:32.768 LIB libspdk_bdev_passthru.a 00:04:32.768 CC module/bdev/nvme/vbdev_opal.o 00:04:32.768 SYMLINK libspdk_bdev_split.so 00:04:32.768 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:32.768 SO libspdk_bdev_passthru.so.6.0 00:04:32.768 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:32.768 SYMLINK libspdk_bdev_passthru.so 00:04:33.027 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:33.027 CC module/bdev/ftl/bdev_ftl.o 00:04:33.027 CC module/bdev/aio/bdev_aio.o 00:04:33.027 CC module/bdev/aio/bdev_aio_rpc.o 00:04:33.027 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:33.027 CC module/bdev/iscsi/bdev_iscsi.o 00:04:33.027 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:33.027 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:33.027 
LIB libspdk_bdev_raid.a 00:04:33.027 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:33.027 SO libspdk_bdev_raid.so.6.0 00:04:33.287 SYMLINK libspdk_bdev_raid.so 00:04:33.287 LIB libspdk_bdev_ftl.a 00:04:33.287 LIB libspdk_bdev_iscsi.a 00:04:33.287 SO libspdk_bdev_ftl.so.6.0 00:04:33.287 SO libspdk_bdev_iscsi.so.6.0 00:04:33.287 LIB libspdk_bdev_aio.a 00:04:33.287 SYMLINK libspdk_bdev_ftl.so 00:04:33.287 SYMLINK libspdk_bdev_iscsi.so 00:04:33.287 SO libspdk_bdev_aio.so.6.0 00:04:33.545 SYMLINK libspdk_bdev_aio.so 00:04:33.545 LIB libspdk_bdev_virtio.a 00:04:33.545 SO libspdk_bdev_virtio.so.6.0 00:04:33.803 SYMLINK libspdk_bdev_virtio.so 00:04:34.369 LIB libspdk_bdev_nvme.a 00:04:34.369 SO libspdk_bdev_nvme.so.7.1 00:04:34.627 SYMLINK libspdk_bdev_nvme.so 00:04:34.884 CC module/event/subsystems/fsdev/fsdev.o 00:04:34.884 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:34.884 CC module/event/subsystems/iobuf/iobuf.o 00:04:34.884 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:34.884 CC module/event/subsystems/keyring/keyring.o 00:04:34.884 CC module/event/subsystems/sock/sock.o 00:04:34.884 CC module/event/subsystems/scheduler/scheduler.o 00:04:34.884 CC module/event/subsystems/vmd/vmd.o 00:04:34.884 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:34.884 LIB libspdk_event_vhost_blk.a 00:04:35.142 LIB libspdk_event_keyring.a 00:04:35.142 SO libspdk_event_vhost_blk.so.3.0 00:04:35.142 LIB libspdk_event_fsdev.a 00:04:35.142 LIB libspdk_event_scheduler.a 00:04:35.142 LIB libspdk_event_sock.a 00:04:35.142 SO libspdk_event_keyring.so.1.0 00:04:35.142 LIB libspdk_event_vmd.a 00:04:35.143 SO libspdk_event_fsdev.so.1.0 00:04:35.143 SO libspdk_event_scheduler.so.4.0 00:04:35.143 SO libspdk_event_sock.so.5.0 00:04:35.143 LIB libspdk_event_iobuf.a 00:04:35.143 SYMLINK libspdk_event_vhost_blk.so 00:04:35.143 SO libspdk_event_vmd.so.6.0 00:04:35.143 SO libspdk_event_iobuf.so.3.0 00:04:35.143 SYMLINK libspdk_event_keyring.so 00:04:35.143 SYMLINK libspdk_event_fsdev.so 00:04:35.143 SYMLINK libspdk_event_sock.so 00:04:35.143 SYMLINK libspdk_event_scheduler.so 00:04:35.143 SYMLINK libspdk_event_vmd.so 00:04:35.143 SYMLINK libspdk_event_iobuf.so 00:04:35.400 CC module/event/subsystems/accel/accel.o 00:04:35.400 LIB libspdk_event_accel.a 00:04:35.400 SO libspdk_event_accel.so.6.0 00:04:35.658 SYMLINK libspdk_event_accel.so 00:04:35.916 CC module/event/subsystems/bdev/bdev.o 00:04:35.916 LIB libspdk_event_bdev.a 00:04:35.916 SO libspdk_event_bdev.so.6.0 00:04:36.175 SYMLINK libspdk_event_bdev.so 00:04:36.175 CC module/event/subsystems/scsi/scsi.o 00:04:36.175 CC module/event/subsystems/ublk/ublk.o 00:04:36.175 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:36.175 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:36.176 CC module/event/subsystems/nbd/nbd.o 00:04:36.435 LIB libspdk_event_ublk.a 00:04:36.435 LIB libspdk_event_nbd.a 00:04:36.435 SO libspdk_event_nbd.so.6.0 00:04:36.435 SO libspdk_event_ublk.so.3.0 00:04:36.435 LIB libspdk_event_scsi.a 00:04:36.435 SYMLINK libspdk_event_ublk.so 00:04:36.435 SYMLINK libspdk_event_nbd.so 00:04:36.435 SO libspdk_event_scsi.so.6.0 00:04:36.435 LIB libspdk_event_nvmf.a 00:04:36.435 SO libspdk_event_nvmf.so.6.0 00:04:36.435 SYMLINK libspdk_event_scsi.so 00:04:36.435 SYMLINK libspdk_event_nvmf.so 00:04:36.693 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:36.693 CC module/event/subsystems/iscsi/iscsi.o 00:04:36.693 LIB libspdk_event_vhost_scsi.a 00:04:36.693 SO libspdk_event_vhost_scsi.so.3.0 00:04:36.693 LIB libspdk_event_iscsi.a 00:04:36.951 SYMLINK 
libspdk_event_vhost_scsi.so 00:04:36.951 SO libspdk_event_iscsi.so.6.0 00:04:36.951 SYMLINK libspdk_event_iscsi.so 00:04:36.951 SO libspdk.so.6.0 00:04:36.951 SYMLINK libspdk.so 00:04:37.210 CC app/trace_record/trace_record.o 00:04:37.210 CXX app/trace/trace.o 00:04:37.210 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:37.210 CC app/iscsi_tgt/iscsi_tgt.o 00:04:37.210 CC app/nvmf_tgt/nvmf_main.o 00:04:37.210 CC test/thread/poller_perf/poller_perf.o 00:04:37.210 CC examples/ioat/perf/perf.o 00:04:37.210 CC app/spdk_tgt/spdk_tgt.o 00:04:37.211 CC examples/util/zipf/zipf.o 00:04:37.211 CC test/dma/test_dma/test_dma.o 00:04:37.468 LINK poller_perf 00:04:37.468 LINK zipf 00:04:37.468 LINK iscsi_tgt 00:04:37.468 LINK interrupt_tgt 00:04:37.468 LINK nvmf_tgt 00:04:37.468 LINK spdk_tgt 00:04:37.468 LINK spdk_trace_record 00:04:37.468 LINK ioat_perf 00:04:37.468 LINK spdk_trace 00:04:37.468 CC examples/ioat/verify/verify.o 00:04:37.727 CC app/spdk_lspci/spdk_lspci.o 00:04:37.727 CC app/spdk_nvme_perf/perf.o 00:04:37.727 CC app/spdk_nvme_identify/identify.o 00:04:37.727 CC app/spdk_nvme_discover/discovery_aer.o 00:04:37.727 CC test/app/bdev_svc/bdev_svc.o 00:04:37.727 LINK test_dma 00:04:37.727 CC test/app/histogram_perf/histogram_perf.o 00:04:37.727 LINK verify 00:04:37.727 CC examples/thread/thread/thread_ex.o 00:04:37.727 LINK spdk_lspci 00:04:37.727 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:37.727 LINK histogram_perf 00:04:37.727 LINK bdev_svc 00:04:37.727 LINK spdk_nvme_discover 00:04:37.727 CC test/app/jsoncat/jsoncat.o 00:04:37.985 CC test/app/stub/stub.o 00:04:37.985 LINK thread 00:04:37.985 CC app/spdk_top/spdk_top.o 00:04:37.985 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:37.985 LINK jsoncat 00:04:37.985 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:37.985 LINK stub 00:04:37.985 CC app/vhost/vhost.o 00:04:37.985 LINK nvme_fuzz 00:04:37.985 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:38.244 CC examples/sock/hello_world/hello_sock.o 00:04:38.244 CC app/spdk_dd/spdk_dd.o 00:04:38.244 LINK vhost 00:04:38.244 TEST_HEADER include/spdk/accel.h 00:04:38.244 TEST_HEADER include/spdk/accel_module.h 00:04:38.244 TEST_HEADER include/spdk/assert.h 00:04:38.244 TEST_HEADER include/spdk/barrier.h 00:04:38.244 TEST_HEADER include/spdk/base64.h 00:04:38.244 TEST_HEADER include/spdk/bdev.h 00:04:38.244 TEST_HEADER include/spdk/bdev_module.h 00:04:38.244 TEST_HEADER include/spdk/bdev_zone.h 00:04:38.244 TEST_HEADER include/spdk/bit_array.h 00:04:38.244 TEST_HEADER include/spdk/bit_pool.h 00:04:38.244 TEST_HEADER include/spdk/blob_bdev.h 00:04:38.244 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:38.244 TEST_HEADER include/spdk/blobfs.h 00:04:38.244 TEST_HEADER include/spdk/blob.h 00:04:38.244 TEST_HEADER include/spdk/conf.h 00:04:38.244 TEST_HEADER include/spdk/config.h 00:04:38.244 TEST_HEADER include/spdk/cpuset.h 00:04:38.244 TEST_HEADER include/spdk/crc16.h 00:04:38.244 TEST_HEADER include/spdk/crc32.h 00:04:38.244 TEST_HEADER include/spdk/crc64.h 00:04:38.244 TEST_HEADER include/spdk/dif.h 00:04:38.244 TEST_HEADER include/spdk/dma.h 00:04:38.244 TEST_HEADER include/spdk/endian.h 00:04:38.244 TEST_HEADER include/spdk/env_dpdk.h 00:04:38.244 TEST_HEADER include/spdk/env.h 00:04:38.244 TEST_HEADER include/spdk/event.h 00:04:38.244 TEST_HEADER include/spdk/fd_group.h 00:04:38.244 CC app/fio/nvme/fio_plugin.o 00:04:38.244 TEST_HEADER include/spdk/fd.h 00:04:38.244 TEST_HEADER include/spdk/file.h 00:04:38.244 TEST_HEADER include/spdk/fsdev.h 00:04:38.244 TEST_HEADER include/spdk/fsdev_module.h 
00:04:38.244 TEST_HEADER include/spdk/ftl.h 00:04:38.244 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:38.244 TEST_HEADER include/spdk/gpt_spec.h 00:04:38.244 TEST_HEADER include/spdk/hexlify.h 00:04:38.244 TEST_HEADER include/spdk/histogram_data.h 00:04:38.244 TEST_HEADER include/spdk/idxd.h 00:04:38.244 TEST_HEADER include/spdk/idxd_spec.h 00:04:38.244 TEST_HEADER include/spdk/init.h 00:04:38.244 TEST_HEADER include/spdk/ioat.h 00:04:38.244 TEST_HEADER include/spdk/ioat_spec.h 00:04:38.244 TEST_HEADER include/spdk/iscsi_spec.h 00:04:38.244 TEST_HEADER include/spdk/json.h 00:04:38.244 TEST_HEADER include/spdk/jsonrpc.h 00:04:38.244 TEST_HEADER include/spdk/keyring.h 00:04:38.244 TEST_HEADER include/spdk/keyring_module.h 00:04:38.244 TEST_HEADER include/spdk/likely.h 00:04:38.244 TEST_HEADER include/spdk/log.h 00:04:38.244 TEST_HEADER include/spdk/lvol.h 00:04:38.244 TEST_HEADER include/spdk/md5.h 00:04:38.244 TEST_HEADER include/spdk/memory.h 00:04:38.244 TEST_HEADER include/spdk/mmio.h 00:04:38.244 TEST_HEADER include/spdk/nbd.h 00:04:38.244 TEST_HEADER include/spdk/net.h 00:04:38.244 TEST_HEADER include/spdk/notify.h 00:04:38.244 TEST_HEADER include/spdk/nvme.h 00:04:38.244 TEST_HEADER include/spdk/nvme_intel.h 00:04:38.244 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:38.244 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:38.244 TEST_HEADER include/spdk/nvme_spec.h 00:04:38.244 TEST_HEADER include/spdk/nvme_zns.h 00:04:38.244 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:38.502 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:38.502 TEST_HEADER include/spdk/nvmf.h 00:04:38.502 TEST_HEADER include/spdk/nvmf_spec.h 00:04:38.502 TEST_HEADER include/spdk/nvmf_transport.h 00:04:38.502 TEST_HEADER include/spdk/opal.h 00:04:38.502 TEST_HEADER include/spdk/opal_spec.h 00:04:38.502 TEST_HEADER include/spdk/pci_ids.h 00:04:38.502 TEST_HEADER include/spdk/pipe.h 00:04:38.502 TEST_HEADER include/spdk/queue.h 00:04:38.502 LINK hello_sock 00:04:38.503 TEST_HEADER include/spdk/reduce.h 00:04:38.503 TEST_HEADER include/spdk/rpc.h 00:04:38.503 TEST_HEADER include/spdk/scheduler.h 00:04:38.503 TEST_HEADER include/spdk/scsi.h 00:04:38.503 TEST_HEADER include/spdk/scsi_spec.h 00:04:38.503 TEST_HEADER include/spdk/sock.h 00:04:38.503 TEST_HEADER include/spdk/stdinc.h 00:04:38.503 TEST_HEADER include/spdk/string.h 00:04:38.503 TEST_HEADER include/spdk/thread.h 00:04:38.503 TEST_HEADER include/spdk/trace.h 00:04:38.503 TEST_HEADER include/spdk/trace_parser.h 00:04:38.503 TEST_HEADER include/spdk/tree.h 00:04:38.503 TEST_HEADER include/spdk/ublk.h 00:04:38.503 TEST_HEADER include/spdk/util.h 00:04:38.503 TEST_HEADER include/spdk/uuid.h 00:04:38.503 TEST_HEADER include/spdk/version.h 00:04:38.503 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:38.503 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:38.503 TEST_HEADER include/spdk/vhost.h 00:04:38.503 TEST_HEADER include/spdk/vmd.h 00:04:38.503 TEST_HEADER include/spdk/xor.h 00:04:38.503 TEST_HEADER include/spdk/zipf.h 00:04:38.503 CXX test/cpp_headers/accel.o 00:04:38.503 LINK spdk_nvme_perf 00:04:38.503 LINK spdk_nvme_identify 00:04:38.503 CC app/fio/bdev/fio_plugin.o 00:04:38.503 LINK vhost_fuzz 00:04:38.503 LINK spdk_dd 00:04:38.503 CXX test/cpp_headers/accel_module.o 00:04:38.503 CXX test/cpp_headers/assert.o 00:04:38.761 LINK spdk_top 00:04:38.761 CC examples/vmd/lsvmd/lsvmd.o 00:04:38.761 CC test/event/event_perf/event_perf.o 00:04:38.761 CC test/env/mem_callbacks/mem_callbacks.o 00:04:38.761 CXX test/cpp_headers/barrier.o 00:04:38.761 CC 
test/event/reactor/reactor.o 00:04:38.761 CXX test/cpp_headers/base64.o 00:04:38.761 CC test/event/reactor_perf/reactor_perf.o 00:04:38.761 LINK lsvmd 00:04:38.761 LINK spdk_nvme 00:04:38.761 LINK event_perf 00:04:38.761 CXX test/cpp_headers/bdev.o 00:04:38.761 LINK reactor_perf 00:04:39.020 LINK reactor 00:04:39.020 LINK spdk_bdev 00:04:39.020 CC test/rpc_client/rpc_client_test.o 00:04:39.020 CC examples/vmd/led/led.o 00:04:39.020 CXX test/cpp_headers/bdev_module.o 00:04:39.020 CC test/nvme/aer/aer.o 00:04:39.020 CC test/accel/dif/dif.o 00:04:39.020 LINK led 00:04:39.020 CC examples/idxd/perf/perf.o 00:04:39.020 CC test/event/app_repeat/app_repeat.o 00:04:39.020 CC test/blobfs/mkfs/mkfs.o 00:04:39.020 LINK rpc_client_test 00:04:39.278 CXX test/cpp_headers/bdev_zone.o 00:04:39.278 CXX test/cpp_headers/bit_array.o 00:04:39.278 LINK aer 00:04:39.278 LINK mem_callbacks 00:04:39.278 LINK app_repeat 00:04:39.278 LINK iscsi_fuzz 00:04:39.278 LINK mkfs 00:04:39.278 CXX test/cpp_headers/bit_pool.o 00:04:39.278 CC test/nvme/reset/reset.o 00:04:39.278 CXX test/cpp_headers/blob_bdev.o 00:04:39.278 CC test/env/vtophys/vtophys.o 00:04:39.537 CXX test/cpp_headers/blobfs_bdev.o 00:04:39.537 CXX test/cpp_headers/blobfs.o 00:04:39.537 LINK vtophys 00:04:39.537 CC test/event/scheduler/scheduler.o 00:04:39.537 LINK idxd_perf 00:04:39.537 CC test/nvme/sgl/sgl.o 00:04:39.537 CC test/lvol/esnap/esnap.o 00:04:39.537 CC test/nvme/e2edp/nvme_dp.o 00:04:39.537 LINK reset 00:04:39.537 CXX test/cpp_headers/blob.o 00:04:39.537 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:39.537 CC test/nvme/overhead/overhead.o 00:04:39.796 LINK scheduler 00:04:39.796 CXX test/cpp_headers/conf.o 00:04:39.796 LINK sgl 00:04:39.796 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:39.796 LINK dif 00:04:39.796 LINK env_dpdk_post_init 00:04:39.796 LINK nvme_dp 00:04:39.796 CXX test/cpp_headers/config.o 00:04:39.796 CXX test/cpp_headers/cpuset.o 00:04:39.796 CC examples/accel/perf/accel_perf.o 00:04:39.796 LINK overhead 00:04:40.054 CC test/env/memory/memory_ut.o 00:04:40.054 CXX test/cpp_headers/crc16.o 00:04:40.054 CC examples/nvme/hello_world/hello_world.o 00:04:40.054 CC examples/blob/hello_world/hello_blob.o 00:04:40.054 CC test/env/pci/pci_ut.o 00:04:40.054 LINK hello_fsdev 00:04:40.054 CC examples/blob/cli/blobcli.o 00:04:40.054 CC test/nvme/err_injection/err_injection.o 00:04:40.054 CXX test/cpp_headers/crc32.o 00:04:40.054 LINK hello_world 00:04:40.313 LINK hello_blob 00:04:40.313 CC test/nvme/startup/startup.o 00:04:40.313 CXX test/cpp_headers/crc64.o 00:04:40.313 LINK err_injection 00:04:40.313 LINK accel_perf 00:04:40.313 CC examples/nvme/reconnect/reconnect.o 00:04:40.313 LINK startup 00:04:40.313 CXX test/cpp_headers/dif.o 00:04:40.313 CC test/nvme/reserve/reserve.o 00:04:40.313 LINK blobcli 00:04:40.313 LINK pci_ut 00:04:40.313 CXX test/cpp_headers/dma.o 00:04:40.571 CC test/nvme/simple_copy/simple_copy.o 00:04:40.571 LINK reconnect 00:04:40.571 CXX test/cpp_headers/endian.o 00:04:40.571 LINK reserve 00:04:40.571 CC test/nvme/connect_stress/connect_stress.o 00:04:40.571 CC test/nvme/boot_partition/boot_partition.o 00:04:40.571 CC test/nvme/compliance/nvme_compliance.o 00:04:40.571 CC examples/bdev/hello_world/hello_bdev.o 00:04:40.571 CXX test/cpp_headers/env_dpdk.o 00:04:40.571 CXX test/cpp_headers/env.o 00:04:40.829 LINK simple_copy 00:04:40.829 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:40.829 LINK boot_partition 00:04:40.829 LINK connect_stress 00:04:40.829 LINK memory_ut 00:04:40.829 CXX 
test/cpp_headers/event.o 00:04:40.829 LINK hello_bdev 00:04:40.829 CC examples/nvme/arbitration/arbitration.o 00:04:40.829 CXX test/cpp_headers/fd_group.o 00:04:40.829 LINK nvme_compliance 00:04:40.829 CXX test/cpp_headers/fd.o 00:04:41.088 CC examples/bdev/bdevperf/bdevperf.o 00:04:41.088 CC examples/nvme/hotplug/hotplug.o 00:04:41.088 CC test/bdev/bdevio/bdevio.o 00:04:41.088 CXX test/cpp_headers/file.o 00:04:41.088 CC test/nvme/fused_ordering/fused_ordering.o 00:04:41.088 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:41.088 CXX test/cpp_headers/fsdev.o 00:04:41.088 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:41.088 LINK hotplug 00:04:41.088 LINK arbitration 00:04:41.088 LINK cmb_copy 00:04:41.088 LINK fused_ordering 00:04:41.346 CXX test/cpp_headers/fsdev_module.o 00:04:41.346 LINK nvme_manage 00:04:41.346 CXX test/cpp_headers/ftl.o 00:04:41.346 LINK doorbell_aers 00:04:41.346 CXX test/cpp_headers/fuse_dispatcher.o 00:04:41.346 CC test/nvme/fdp/fdp.o 00:04:41.346 LINK bdevio 00:04:41.346 CC examples/nvme/abort/abort.o 00:04:41.346 CC test/nvme/cuse/cuse.o 00:04:41.346 CXX test/cpp_headers/gpt_spec.o 00:04:41.346 CXX test/cpp_headers/hexlify.o 00:04:41.346 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:41.605 CXX test/cpp_headers/histogram_data.o 00:04:41.605 CXX test/cpp_headers/idxd.o 00:04:41.605 CXX test/cpp_headers/idxd_spec.o 00:04:41.605 LINK pmr_persistence 00:04:41.605 CXX test/cpp_headers/init.o 00:04:41.605 LINK bdevperf 00:04:41.605 CXX test/cpp_headers/ioat.o 00:04:41.605 CXX test/cpp_headers/ioat_spec.o 00:04:41.605 CXX test/cpp_headers/iscsi_spec.o 00:04:41.605 CXX test/cpp_headers/json.o 00:04:41.605 CXX test/cpp_headers/jsonrpc.o 00:04:41.605 LINK fdp 00:04:41.863 CXX test/cpp_headers/keyring.o 00:04:41.863 CXX test/cpp_headers/keyring_module.o 00:04:41.863 CXX test/cpp_headers/likely.o 00:04:41.863 LINK abort 00:04:41.863 CXX test/cpp_headers/log.o 00:04:41.863 CXX test/cpp_headers/lvol.o 00:04:41.863 CXX test/cpp_headers/md5.o 00:04:41.863 CXX test/cpp_headers/memory.o 00:04:41.863 CXX test/cpp_headers/mmio.o 00:04:41.863 CXX test/cpp_headers/nbd.o 00:04:41.863 CXX test/cpp_headers/net.o 00:04:41.863 CXX test/cpp_headers/notify.o 00:04:41.863 CXX test/cpp_headers/nvme.o 00:04:41.863 CXX test/cpp_headers/nvme_intel.o 00:04:41.863 CXX test/cpp_headers/nvme_ocssd.o 00:04:41.863 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:42.122 CXX test/cpp_headers/nvme_spec.o 00:04:42.122 CXX test/cpp_headers/nvme_zns.o 00:04:42.122 CXX test/cpp_headers/nvmf_cmd.o 00:04:42.122 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:42.122 CXX test/cpp_headers/nvmf.o 00:04:42.122 CC examples/nvmf/nvmf/nvmf.o 00:04:42.122 CXX test/cpp_headers/nvmf_spec.o 00:04:42.122 CXX test/cpp_headers/nvmf_transport.o 00:04:42.122 CXX test/cpp_headers/opal.o 00:04:42.122 CXX test/cpp_headers/opal_spec.o 00:04:42.122 CXX test/cpp_headers/pci_ids.o 00:04:42.122 CXX test/cpp_headers/pipe.o 00:04:42.122 CXX test/cpp_headers/queue.o 00:04:42.122 CXX test/cpp_headers/reduce.o 00:04:42.381 CXX test/cpp_headers/rpc.o 00:04:42.381 CXX test/cpp_headers/scheduler.o 00:04:42.381 CXX test/cpp_headers/scsi.o 00:04:42.381 LINK nvmf 00:04:42.381 CXX test/cpp_headers/scsi_spec.o 00:04:42.381 CXX test/cpp_headers/sock.o 00:04:42.381 CXX test/cpp_headers/stdinc.o 00:04:42.381 LINK cuse 00:04:42.381 CXX test/cpp_headers/string.o 00:04:42.381 CXX test/cpp_headers/thread.o 00:04:42.381 CXX test/cpp_headers/trace.o 00:04:42.381 CXX test/cpp_headers/trace_parser.o 00:04:42.381 CXX test/cpp_headers/tree.o 00:04:42.381 CXX 
test/cpp_headers/ublk.o 00:04:42.381 CXX test/cpp_headers/util.o 00:04:42.381 CXX test/cpp_headers/uuid.o 00:04:42.381 CXX test/cpp_headers/version.o 00:04:42.381 CXX test/cpp_headers/vfio_user_pci.o 00:04:42.381 CXX test/cpp_headers/vfio_user_spec.o 00:04:42.381 CXX test/cpp_headers/vhost.o 00:04:42.640 CXX test/cpp_headers/vmd.o 00:04:42.640 CXX test/cpp_headers/xor.o 00:04:42.640 CXX test/cpp_headers/zipf.o 00:04:43.578 LINK esnap 00:04:43.841 00:04:43.841 real 1m0.355s 00:04:43.841 user 5m6.188s 00:04:43.841 sys 0m51.165s 00:04:43.841 04:10:29 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:43.841 ************************************ 00:04:43.841 END TEST make 00:04:43.841 ************************************ 00:04:43.841 04:10:29 make -- common/autotest_common.sh@10 -- $ set +x 00:04:44.100 04:10:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:44.100 04:10:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:44.100 04:10:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:44.100 04:10:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:44.100 04:10:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:44.100 04:10:29 -- pm/common@44 -- $ pid=5812 00:04:44.100 04:10:29 -- pm/common@50 -- $ kill -TERM 5812 00:04:44.100 04:10:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:44.100 04:10:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:44.100 04:10:29 -- pm/common@44 -- $ pid=5813 00:04:44.100 04:10:29 -- pm/common@50 -- $ kill -TERM 5813 00:04:44.100 04:10:29 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:44.100 04:10:29 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:44.100 04:10:29 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.100 04:10:29 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.100 04:10:29 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.100 04:10:29 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.100 04:10:29 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.100 04:10:29 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.100 04:10:29 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.100 04:10:29 -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.100 04:10:29 -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.100 04:10:29 -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.100 04:10:29 -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.100 04:10:29 -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.100 04:10:29 -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.100 04:10:29 -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.100 04:10:29 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.100 04:10:29 -- scripts/common.sh@344 -- # case "$op" in 00:04:44.100 04:10:29 -- scripts/common.sh@345 -- # : 1 00:04:44.100 04:10:29 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.100 04:10:29 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.100 04:10:29 -- scripts/common.sh@365 -- # decimal 1 00:04:44.100 04:10:29 -- scripts/common.sh@353 -- # local d=1 00:04:44.100 04:10:29 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.100 04:10:29 -- scripts/common.sh@355 -- # echo 1 00:04:44.100 04:10:29 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.100 04:10:29 -- scripts/common.sh@366 -- # decimal 2 00:04:44.100 04:10:29 -- scripts/common.sh@353 -- # local d=2 00:04:44.100 04:10:29 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.100 04:10:29 -- scripts/common.sh@355 -- # echo 2 00:04:44.100 04:10:29 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.101 04:10:29 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.101 04:10:29 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.101 04:10:29 -- scripts/common.sh@368 -- # return 0 00:04:44.101 04:10:29 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.101 04:10:29 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.101 --rc genhtml_branch_coverage=1 00:04:44.101 --rc genhtml_function_coverage=1 00:04:44.101 --rc genhtml_legend=1 00:04:44.101 --rc geninfo_all_blocks=1 00:04:44.101 --rc geninfo_unexecuted_blocks=1 00:04:44.101 00:04:44.101 ' 00:04:44.101 04:10:29 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.101 --rc genhtml_branch_coverage=1 00:04:44.101 --rc genhtml_function_coverage=1 00:04:44.101 --rc genhtml_legend=1 00:04:44.101 --rc geninfo_all_blocks=1 00:04:44.101 --rc geninfo_unexecuted_blocks=1 00:04:44.101 00:04:44.101 ' 00:04:44.101 04:10:29 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.101 --rc genhtml_branch_coverage=1 00:04:44.101 --rc genhtml_function_coverage=1 00:04:44.101 --rc genhtml_legend=1 00:04:44.101 --rc geninfo_all_blocks=1 00:04:44.101 --rc geninfo_unexecuted_blocks=1 00:04:44.101 00:04:44.101 ' 00:04:44.101 04:10:29 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.101 --rc genhtml_branch_coverage=1 00:04:44.101 --rc genhtml_function_coverage=1 00:04:44.101 --rc genhtml_legend=1 00:04:44.101 --rc geninfo_all_blocks=1 00:04:44.101 --rc geninfo_unexecuted_blocks=1 00:04:44.101 00:04:44.101 ' 00:04:44.101 04:10:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.101 04:10:29 -- nvmf/common.sh@7 -- # uname -s 00:04:44.101 04:10:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.101 04:10:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.101 04:10:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.101 04:10:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.101 04:10:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.101 04:10:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.101 04:10:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.101 04:10:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.101 04:10:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.101 04:10:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.101 04:10:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:707b909b-d9e3-4a2c-b9ec-709ea86a88f1 00:04:44.101 
04:10:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=707b909b-d9e3-4a2c-b9ec-709ea86a88f1 00:04:44.101 04:10:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.101 04:10:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.101 04:10:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.101 04:10:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.101 04:10:29 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.101 04:10:29 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.101 04:10:29 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.101 04:10:29 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.101 04:10:29 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.101 04:10:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.101 04:10:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.101 04:10:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.101 04:10:29 -- paths/export.sh@5 -- # export PATH 00:04:44.101 04:10:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.101 04:10:29 -- nvmf/common.sh@51 -- # : 0 00:04:44.101 04:10:29 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.101 04:10:29 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.101 04:10:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.101 04:10:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.101 04:10:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.101 04:10:29 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.101 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.101 04:10:29 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.101 04:10:29 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.101 04:10:29 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.101 04:10:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:44.101 04:10:29 -- spdk/autotest.sh@32 -- # uname -s 00:04:44.101 04:10:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:44.101 04:10:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:44.101 04:10:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:44.101 04:10:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:44.101 04:10:29 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:44.101 04:10:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:44.361 04:10:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:44.361 04:10:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:44.361 04:10:29 -- spdk/autotest.sh@48 -- # udevadm_pid=66554 00:04:44.361 04:10:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:44.361 04:10:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:44.361 04:10:29 -- pm/common@17 -- # local monitor 00:04:44.361 04:10:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:44.361 04:10:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:44.361 04:10:29 -- pm/common@25 -- # sleep 1 00:04:44.361 04:10:29 -- pm/common@21 -- # date +%s 00:04:44.361 04:10:29 -- pm/common@21 -- # date +%s 00:04:44.361 04:10:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731816629 00:04:44.361 04:10:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731816629 00:04:44.361 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731816629_collect-cpu-load.pm.log 00:04:44.361 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731816629_collect-vmstat.pm.log 00:04:45.302 04:10:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:45.302 04:10:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:45.302 04:10:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.302 04:10:30 -- common/autotest_common.sh@10 -- # set +x 00:04:45.302 04:10:30 -- spdk/autotest.sh@59 -- # create_test_list 00:04:45.302 04:10:30 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:45.302 04:10:30 -- common/autotest_common.sh@10 -- # set +x 00:04:45.302 04:10:30 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:45.302 04:10:30 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:45.302 04:10:30 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:45.302 04:10:30 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:45.302 04:10:30 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:45.302 04:10:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:45.302 04:10:30 -- common/autotest_common.sh@1457 -- # uname 00:04:45.302 04:10:30 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:45.302 04:10:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:45.302 04:10:30 -- common/autotest_common.sh@1477 -- # uname 00:04:45.302 04:10:30 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:45.302 04:10:30 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:45.302 04:10:30 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:45.302 lcov: LCOV version 1.15 00:04:45.302 04:10:30 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:57.589 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:57.589 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:15.714 04:10:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:15.714 04:10:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.715 04:10:58 -- common/autotest_common.sh@10 -- # set +x 00:05:15.715 04:10:58 -- spdk/autotest.sh@78 -- # rm -f 00:05:15.715 04:10:58 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.715 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:15.715 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:15.715 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:15.715 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:15.715 04:10:59 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:15.715 04:10:59 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:15.715 04:10:59 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:15.715 04:10:59 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:15.715 04:10:59 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.715 04:10:59 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:15.715 04:10:59 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:15.715 04:10:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.715 04:10:59 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:15.715 04:10:59 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:15.715 04:10:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.715 04:10:59 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:15.715 04:10:59 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:15.715 04:10:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.715 04:10:59 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:15.715 04:10:59 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:15.715 04:10:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.715 04:10:59 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:05:15.715 04:10:59 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:05:15.715 04:10:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:15.715 
04:10:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.715 04:10:59 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:15.715 04:10:59 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:15.715 04:10:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.715 04:10:59 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:15.715 04:10:59 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:15.715 04:10:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:15.715 04:10:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.715 04:10:59 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:15.715 04:10:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.715 04:10:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.715 04:10:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:15.715 04:10:59 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:15.715 04:10:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:15.715 No valid GPT data, bailing 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # pt= 00:05:15.715 04:10:59 -- scripts/common.sh@395 -- # return 1 00:05:15.715 04:10:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:15.715 1+0 records in 00:05:15.715 1+0 records out 00:05:15.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292203 s, 35.9 MB/s 00:05:15.715 04:10:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.715 04:10:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.715 04:10:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:15.715 04:10:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:15.715 04:10:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:15.715 No valid GPT data, bailing 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # pt= 00:05:15.715 04:10:59 -- scripts/common.sh@395 -- # return 1 00:05:15.715 04:10:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:15.715 1+0 records in 00:05:15.715 1+0 records out 00:05:15.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534759 s, 196 MB/s 00:05:15.715 04:10:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.715 04:10:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.715 04:10:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:15.715 04:10:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:15.715 04:10:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:15.715 No valid GPT data, bailing 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # pt= 00:05:15.715 04:10:59 -- scripts/common.sh@395 -- # return 1 00:05:15.715 04:10:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:15.715 1+0 
records in 00:05:15.715 1+0 records out 00:05:15.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00588889 s, 178 MB/s 00:05:15.715 04:10:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.715 04:10:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.715 04:10:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:15.715 04:10:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:15.715 04:10:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:15.715 No valid GPT data, bailing 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # pt= 00:05:15.715 04:10:59 -- scripts/common.sh@395 -- # return 1 00:05:15.715 04:10:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:15.715 1+0 records in 00:05:15.715 1+0 records out 00:05:15.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00546221 s, 192 MB/s 00:05:15.715 04:10:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.715 04:10:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.715 04:10:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:15.715 04:10:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:15.715 04:10:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:15.715 No valid GPT data, bailing 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # pt= 00:05:15.715 04:10:59 -- scripts/common.sh@395 -- # return 1 00:05:15.715 04:10:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:15.715 1+0 records in 00:05:15.715 1+0 records out 00:05:15.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577088 s, 182 MB/s 00:05:15.715 04:10:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.715 04:10:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.715 04:10:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:15.715 04:10:59 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:15.715 04:10:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:15.715 No valid GPT data, bailing 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:15.715 04:10:59 -- scripts/common.sh@394 -- # pt= 00:05:15.715 04:10:59 -- scripts/common.sh@395 -- # return 1 00:05:15.715 04:10:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:15.715 1+0 records in 00:05:15.715 1+0 records out 00:05:15.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00571917 s, 183 MB/s 00:05:15.715 04:10:59 -- spdk/autotest.sh@105 -- # sync 00:05:15.715 04:10:59 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:15.716 04:10:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:15.716 04:10:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:15.977 04:11:01 -- spdk/autotest.sh@111 -- # uname -s 00:05:15.977 04:11:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:15.977 04:11:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:15.977 04:11:01 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:16.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.122 
Hugepages 00:05:17.122 node hugesize free / total 00:05:17.122 node0 1048576kB 0 / 0 00:05:17.123 node0 2048kB 0 / 0 00:05:17.123 00:05:17.123 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:17.123 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:17.123 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:17.123 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:17.123 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:17.440 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:17.440 04:11:02 -- spdk/autotest.sh@117 -- # uname -s 00:05:17.440 04:11:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:17.440 04:11:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:17.440 04:11:02 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.271 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.271 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.271 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.532 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.532 04:11:04 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:19.474 04:11:05 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:19.474 04:11:05 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:19.474 04:11:05 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:19.474 04:11:05 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:19.474 04:11:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:19.474 04:11:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:19.474 04:11:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:19.474 04:11:05 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:19.474 04:11:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:19.474 04:11:05 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:19.474 04:11:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:19.474 04:11:05 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.046 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.046 Waiting for block devices as requested 00:05:20.046 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.046 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.307 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.307 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.587 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:25.587 04:11:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.587 04:11:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:25.587 04:11:11 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:25.587 04:11:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:25.587 04:11:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:25.587 04:11:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.587 04:11:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.587 04:11:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.587 04:11:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1543 -- # continue 00:05:25.587 04:11:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.587 04:11:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:25.587 04:11:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:25.587 04:11:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:25.587 04:11:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:25.587 04:11:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.587 04:11:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.587 04:11:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.587 04:11:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1543 -- # continue 00:05:25.587 04:11:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.587 04:11:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.587 04:11:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.587 04:11:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.587 04:11:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1543 -- # continue 00:05:25.587 04:11:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.587 04:11:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:25.587 04:11:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:25.587 04:11:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:25.587 04:11:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:25.587 04:11:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:25.587 04:11:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.587 04:11:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.587 04:11:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.587 04:11:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.587 04:11:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.588 04:11:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:25.588 04:11:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.588 04:11:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.588 04:11:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:05:25.588 04:11:11 -- common/autotest_common.sh@1543 -- # continue 00:05:25.588 04:11:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:25.588 04:11:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.588 04:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:25.588 04:11:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:25.588 04:11:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.588 04:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:25.588 04:11:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.412 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.413 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.413 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.413 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.413 04:11:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:26.413 04:11:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.413 04:11:12 -- common/autotest_common.sh@10 -- # set +x 00:05:26.413 04:11:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:26.413 04:11:12 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:26.413 04:11:12 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:26.413 04:11:12 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:26.413 04:11:12 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:26.413 04:11:12 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:26.413 04:11:12 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:26.413 04:11:12 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:26.413 04:11:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:26.413 04:11:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:26.413 04:11:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.413 04:11:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:26.413 04:11:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.674 04:11:12 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:26.674 04:11:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:26.674 04:11:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:26.674 04:11:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:26.674 04:11:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:26.674 04:11:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.674 04:11:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:26.674 04:11:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:26.674 04:11:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:26.674 04:11:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.674 04:11:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:26.674 04:11:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:26.674 04:11:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:26.674 04:11:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:05:26.674 04:11:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:26.674 04:11:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:26.674 04:11:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:26.674 04:11:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.674 04:11:12 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:26.674 04:11:12 -- common/autotest_common.sh@1572 -- # return 0 00:05:26.674 04:11:12 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:26.674 04:11:12 -- common/autotest_common.sh@1580 -- # return 0 00:05:26.674 04:11:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:26.674 04:11:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:26.674 04:11:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:26.674 04:11:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:26.674 04:11:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:26.674 04:11:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.674 04:11:12 -- common/autotest_common.sh@10 -- # set +x 00:05:26.674 04:11:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:26.674 04:11:12 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.674 04:11:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.674 04:11:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.674 04:11:12 -- common/autotest_common.sh@10 -- # set +x 00:05:26.674 ************************************ 00:05:26.674 START TEST env 00:05:26.674 ************************************ 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.674 * Looking for test storage... 00:05:26.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.674 04:11:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.674 04:11:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.674 04:11:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.674 04:11:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.674 04:11:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.674 04:11:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.674 04:11:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.674 04:11:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.674 04:11:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.674 04:11:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.674 04:11:12 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.674 04:11:12 env -- scripts/common.sh@344 -- # case "$op" in 00:05:26.674 04:11:12 env -- scripts/common.sh@345 -- # : 1 00:05:26.674 04:11:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.674 04:11:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.674 04:11:12 env -- scripts/common.sh@365 -- # decimal 1 00:05:26.674 04:11:12 env -- scripts/common.sh@353 -- # local d=1 00:05:26.674 04:11:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.674 04:11:12 env -- scripts/common.sh@355 -- # echo 1 00:05:26.674 04:11:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.674 04:11:12 env -- scripts/common.sh@366 -- # decimal 2 00:05:26.674 04:11:12 env -- scripts/common.sh@353 -- # local d=2 00:05:26.674 04:11:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.674 04:11:12 env -- scripts/common.sh@355 -- # echo 2 00:05:26.674 04:11:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.674 04:11:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.674 04:11:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.674 04:11:12 env -- scripts/common.sh@368 -- # return 0 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.674 --rc genhtml_branch_coverage=1 00:05:26.674 --rc genhtml_function_coverage=1 00:05:26.674 --rc genhtml_legend=1 00:05:26.674 --rc geninfo_all_blocks=1 00:05:26.674 --rc geninfo_unexecuted_blocks=1 00:05:26.674 00:05:26.674 ' 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.674 --rc genhtml_branch_coverage=1 00:05:26.674 --rc genhtml_function_coverage=1 00:05:26.674 --rc genhtml_legend=1 00:05:26.674 --rc geninfo_all_blocks=1 00:05:26.674 --rc geninfo_unexecuted_blocks=1 00:05:26.674 00:05:26.674 ' 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.674 --rc genhtml_branch_coverage=1 00:05:26.674 --rc genhtml_function_coverage=1 00:05:26.674 --rc genhtml_legend=1 00:05:26.674 --rc geninfo_all_blocks=1 00:05:26.674 --rc geninfo_unexecuted_blocks=1 00:05:26.674 00:05:26.674 ' 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.674 --rc genhtml_branch_coverage=1 00:05:26.674 --rc genhtml_function_coverage=1 00:05:26.674 --rc genhtml_legend=1 00:05:26.674 --rc geninfo_all_blocks=1 00:05:26.674 --rc geninfo_unexecuted_blocks=1 00:05:26.674 00:05:26.674 ' 00:05:26.674 04:11:12 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.674 04:11:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.675 04:11:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.675 ************************************ 00:05:26.675 START TEST env_memory 00:05:26.675 ************************************ 00:05:26.675 04:11:12 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:26.675 00:05:26.675 00:05:26.675 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.675 http://cunit.sourceforge.net/ 00:05:26.675 00:05:26.675 00:05:26.675 Suite: memory 00:05:26.934 Test: alloc and free memory map ...[2024-11-17 04:11:12.406595] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:26.934 passed 00:05:26.934 Test: mem map translation ...[2024-11-17 04:11:12.445652] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:26.934 [2024-11-17 04:11:12.445699] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:26.934 [2024-11-17 04:11:12.445759] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:26.934 [2024-11-17 04:11:12.445773] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:26.934 passed 00:05:26.934 Test: mem map registration ...[2024-11-17 04:11:12.513803] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:26.934 [2024-11-17 04:11:12.513844] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:26.934 passed 00:05:26.934 Test: mem map adjacent registrations ...passed 00:05:26.934 00:05:26.934 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.934 suites 1 1 n/a 0 0 00:05:26.934 tests 4 4 4 0 0 00:05:26.934 asserts 152 152 152 0 n/a 00:05:26.934 00:05:26.934 Elapsed time = 0.233 seconds 00:05:26.934 00:05:26.934 real 0m0.271s 00:05:26.934 user 0m0.234s 00:05:26.934 sys 0m0.028s 00:05:26.934 ************************************ 00:05:26.934 04:11:12 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.934 04:11:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:26.934 END TEST env_memory 00:05:26.934 ************************************ 00:05:27.193 04:11:12 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.193 04:11:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.193 04:11:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.193 04:11:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.193 ************************************ 00:05:27.193 START TEST env_vtophys 00:05:27.193 ************************************ 00:05:27.194 04:11:12 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.194 EAL: lib.eal log level changed from notice to debug 00:05:27.194 EAL: Detected lcore 0 as core 0 on socket 0 00:05:27.194 EAL: Detected lcore 1 as core 0 on socket 0 00:05:27.194 EAL: Detected lcore 2 as core 0 on socket 0 00:05:27.194 EAL: Detected lcore 3 as core 0 on socket 0 00:05:27.194 EAL: Detected lcore 4 as core 0 on socket 0 00:05:27.194 EAL: Detected lcore 5 as core 0 on socket 0 00:05:27.194 EAL: Detected lcore 6 as core 0 on socket 0 00:05:27.194 EAL: Detected lcore 7 as core 0 on socket 0 00:05:27.194 EAL: Detected lcore 8 as core 0 on socket 0 00:05:27.194 EAL: Detected lcore 9 as core 0 on socket 0 00:05:27.194 EAL: Maximum logical cores by configuration: 128 00:05:27.194 EAL: Detected CPU lcores: 10 00:05:27.194 EAL: Detected NUMA nodes: 1 00:05:27.194 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:27.194 EAL: Detected shared linkage of DPDK 00:05:27.194 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:27.194 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:27.194 EAL: Registered [vdev] bus. 00:05:27.194 EAL: bus.vdev log level changed from disabled to notice 00:05:27.194 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:27.194 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:27.194 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:27.194 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:27.194 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:27.194 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:27.194 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:27.194 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:27.194 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.194 EAL: No shared files mode enabled, IPC is disabled 00:05:27.194 EAL: Selected IOVA mode 'PA' 00:05:27.194 EAL: Probing VFIO support... 00:05:27.194 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.194 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:27.194 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.194 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.194 EAL: Setting up physically contiguous memory... 00:05:27.194 EAL: Setting maximum number of open files to 524288 00:05:27.194 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.194 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.194 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.194 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.194 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.194 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.194 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.194 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.194 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.194 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.194 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.194 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.194 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.194 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.194 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.194 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.194 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.194 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.194 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.194 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.194 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.194 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.194 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.194 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.194 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.194 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:27.194 EAL: Hugepages will be freed exactly as allocated. 00:05:27.194 EAL: No shared files mode enabled, IPC is disabled 00:05:27.194 EAL: No shared files mode enabled, IPC is disabled 00:05:27.194 EAL: TSC frequency is ~2600000 KHz 00:05:27.194 EAL: Main lcore 0 is ready (tid=7fb695847a40;cpuset=[0]) 00:05:27.194 EAL: Trying to obtain current memory policy. 00:05:27.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.194 EAL: Restoring previous memory policy: 0 00:05:27.194 EAL: request: mp_malloc_sync 00:05:27.194 EAL: No shared files mode enabled, IPC is disabled 00:05:27.194 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.194 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.194 EAL: No shared files mode enabled, IPC is disabled 00:05:27.194 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:27.194 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.194 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:27.194 00:05:27.194 00:05:27.194 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.194 http://cunit.sourceforge.net/ 00:05:27.194 00:05:27.194 00:05:27.194 Suite: components_suite 00:05:27.764 Test: vtophys_malloc_test ...passed 00:05:27.764 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:27.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.764 EAL: Restoring previous memory policy: 4 00:05:27.764 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.764 EAL: request: mp_malloc_sync 00:05:27.764 EAL: No shared files mode enabled, IPC is disabled 00:05:27.764 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.764 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.764 EAL: request: mp_malloc_sync 00:05:27.764 EAL: No shared files mode enabled, IPC is disabled 00:05:27.764 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.764 EAL: Trying to obtain current memory policy. 00:05:27.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.764 EAL: Restoring previous memory policy: 4 00:05:27.764 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.764 EAL: request: mp_malloc_sync 00:05:27.764 EAL: No shared files mode enabled, IPC is disabled 00:05:27.764 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.764 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.764 EAL: request: mp_malloc_sync 00:05:27.764 EAL: No shared files mode enabled, IPC is disabled 00:05:27.764 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.764 EAL: Trying to obtain current memory policy. 00:05:27.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.764 EAL: Restoring previous memory policy: 4 00:05:27.764 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.764 EAL: request: mp_malloc_sync 00:05:27.764 EAL: No shared files mode enabled, IPC is disabled 00:05:27.764 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.764 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.764 EAL: request: mp_malloc_sync 00:05:27.764 EAL: No shared files mode enabled, IPC is disabled 00:05:27.764 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.764 EAL: Trying to obtain current memory policy. 
00:05:27.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.764 EAL: Restoring previous memory policy: 4 00:05:27.764 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.764 EAL: request: mp_malloc_sync 00:05:27.764 EAL: No shared files mode enabled, IPC is disabled 00:05:27.764 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.764 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.764 EAL: request: mp_malloc_sync 00:05:27.764 EAL: No shared files mode enabled, IPC is disabled 00:05:27.764 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.764 EAL: Trying to obtain current memory policy. 00:05:27.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.765 EAL: Restoring previous memory policy: 4 00:05:27.765 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.765 EAL: request: mp_malloc_sync 00:05:27.765 EAL: No shared files mode enabled, IPC is disabled 00:05:27.765 EAL: Heap on socket 0 was expanded by 34MB 00:05:27.765 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.765 EAL: request: mp_malloc_sync 00:05:27.765 EAL: No shared files mode enabled, IPC is disabled 00:05:27.765 EAL: Heap on socket 0 was shrunk by 34MB 00:05:27.765 EAL: Trying to obtain current memory policy. 00:05:27.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.765 EAL: Restoring previous memory policy: 4 00:05:27.765 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.765 EAL: request: mp_malloc_sync 00:05:27.765 EAL: No shared files mode enabled, IPC is disabled 00:05:27.765 EAL: Heap on socket 0 was expanded by 66MB 00:05:27.765 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.765 EAL: request: mp_malloc_sync 00:05:27.765 EAL: No shared files mode enabled, IPC is disabled 00:05:27.765 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.765 EAL: Trying to obtain current memory policy. 00:05:27.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.765 EAL: Restoring previous memory policy: 4 00:05:27.765 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.765 EAL: request: mp_malloc_sync 00:05:27.765 EAL: No shared files mode enabled, IPC is disabled 00:05:27.765 EAL: Heap on socket 0 was expanded by 130MB 00:05:27.765 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.765 EAL: request: mp_malloc_sync 00:05:27.765 EAL: No shared files mode enabled, IPC is disabled 00:05:27.765 EAL: Heap on socket 0 was shrunk by 130MB 00:05:27.765 EAL: Trying to obtain current memory policy. 00:05:27.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.765 EAL: Restoring previous memory policy: 4 00:05:27.765 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.765 EAL: request: mp_malloc_sync 00:05:27.765 EAL: No shared files mode enabled, IPC is disabled 00:05:27.765 EAL: Heap on socket 0 was expanded by 258MB 00:05:27.765 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.765 EAL: request: mp_malloc_sync 00:05:27.765 EAL: No shared files mode enabled, IPC is disabled 00:05:27.765 EAL: Heap on socket 0 was shrunk by 258MB 00:05:27.765 EAL: Trying to obtain current memory policy. 
00:05:27.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.765 EAL: Restoring previous memory policy: 4 00:05:27.765 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.765 EAL: request: mp_malloc_sync 00:05:27.765 EAL: No shared files mode enabled, IPC is disabled 00:05:27.765 EAL: Heap on socket 0 was expanded by 514MB 00:05:28.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.024 EAL: request: mp_malloc_sync 00:05:28.024 EAL: No shared files mode enabled, IPC is disabled 00:05:28.024 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.024 EAL: Trying to obtain current memory policy. 00:05:28.024 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.024 EAL: Restoring previous memory policy: 4 00:05:28.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.024 EAL: request: mp_malloc_sync 00:05:28.024 EAL: No shared files mode enabled, IPC is disabled 00:05:28.024 EAL: Heap on socket 0 was expanded by 1026MB 00:05:28.284 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.284 passed 00:05:28.284 00:05:28.284 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.284 suites 1 1 n/a 0 0 00:05:28.284 tests 2 2 2 0 0 00:05:28.284 asserts 5358 5358 5358 0 n/a 00:05:28.284 00:05:28.284 Elapsed time = 1.013 seconds 00:05:28.284 EAL: request: mp_malloc_sync 00:05:28.284 EAL: No shared files mode enabled, IPC is disabled 00:05:28.284 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:28.284 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.284 EAL: request: mp_malloc_sync 00:05:28.284 EAL: No shared files mode enabled, IPC is disabled 00:05:28.284 EAL: Heap on socket 0 was shrunk by 2MB 00:05:28.284 EAL: No shared files mode enabled, IPC is disabled 00:05:28.284 EAL: No shared files mode enabled, IPC is disabled 00:05:28.284 EAL: No shared files mode enabled, IPC is disabled 00:05:28.284 00:05:28.284 real 0m1.253s 00:05:28.284 user 0m0.500s 00:05:28.284 sys 0m0.617s 00:05:28.284 04:11:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.284 04:11:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:28.284 ************************************ 00:05:28.284 END TEST env_vtophys 00:05:28.284 ************************************ 00:05:28.284 04:11:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:28.284 04:11:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.284 04:11:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.285 04:11:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.285 ************************************ 00:05:28.285 START TEST env_pci 00:05:28.285 ************************************ 00:05:28.285 04:11:13 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:28.285 00:05:28.285 00:05:28.285 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.285 http://cunit.sourceforge.net/ 00:05:28.285 00:05:28.285 00:05:28.285 Suite: pci 00:05:28.285 Test: pci_hook ...[2024-11-17 04:11:13.993409] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69261 has claimed it 00:05:28.545 passed 00:05:28.545 00:05:28.545 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.545 suites 1 1 n/a 0 0 00:05:28.545 tests 1 1 1 0 0 00:05:28.545 asserts 25 25 25 0 n/a 00:05:28.545 00:05:28.545 Elapsed time = 0.003 seconds 00:05:28.545 EAL: Cannot find 
device (10000:00:01.0) 00:05:28.545 EAL: Failed to attach device on primary process 00:05:28.545 00:05:28.545 real 0m0.050s 00:05:28.545 user 0m0.025s 00:05:28.545 sys 0m0.024s 00:05:28.545 04:11:14 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.545 04:11:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:28.545 ************************************ 00:05:28.545 END TEST env_pci 00:05:28.545 ************************************ 00:05:28.545 04:11:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:28.545 04:11:14 env -- env/env.sh@15 -- # uname 00:05:28.545 04:11:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:28.545 04:11:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:28.546 04:11:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.546 04:11:14 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:28.546 04:11:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.546 04:11:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.546 ************************************ 00:05:28.546 START TEST env_dpdk_post_init 00:05:28.546 ************************************ 00:05:28.546 04:11:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.546 EAL: Detected CPU lcores: 10 00:05:28.546 EAL: Detected NUMA nodes: 1 00:05:28.546 EAL: Detected shared linkage of DPDK 00:05:28.546 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.546 EAL: Selected IOVA mode 'PA' 00:05:28.546 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.806 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:28.806 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:28.806 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:28.806 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:28.806 Starting DPDK initialization... 00:05:28.806 Starting SPDK post initialization... 00:05:28.806 SPDK NVMe probe 00:05:28.806 Attaching to 0000:00:10.0 00:05:28.806 Attaching to 0000:00:11.0 00:05:28.806 Attaching to 0000:00:12.0 00:05:28.806 Attaching to 0000:00:13.0 00:05:28.806 Attached to 0000:00:13.0 00:05:28.806 Attached to 0000:00:10.0 00:05:28.806 Attached to 0000:00:11.0 00:05:28.806 Attached to 0000:00:12.0 00:05:28.806 Cleaning up... 
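Note on the env_dpdk_post_init step above: the test binary initializes the SPDK environment layer on a single core (-c 0x1) with a fixed base virtual address, lets the spdk_nvme driver probe and attach the four emulated controllers, and then cleans up. A minimal sketch of reproducing that step by hand outside the autotest harness, assuming the repository layout and uio_pci_generic binding shown earlier in this trace (paths and hugepage size are taken from this run, not guaranteed elsewhere):

    # Rebind the NVMe controllers to a userspace driver and reserve hugepages,
    # then run the post-init test binary with the same arguments as the run above.
    sudo HUGEMEM=2048 ./scripts/setup.sh
    ./test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000
    # Return the devices to the kernel nvme driver afterwards.
    sudo ./scripts/setup.sh reset

The setup.sh/reset pair mirrors the "nvme -> uio_pci_generic" rebinding logged during the afterboot stage earlier in this run.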
00:05:28.806 00:05:28.807 real 0m0.230s 00:05:28.807 user 0m0.071s 00:05:28.807 sys 0m0.062s 00:05:28.807 04:11:14 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.807 ************************************ 00:05:28.807 END TEST env_dpdk_post_init 00:05:28.807 ************************************ 00:05:28.807 04:11:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.807 04:11:14 env -- env/env.sh@26 -- # uname 00:05:28.807 04:11:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.807 04:11:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.807 04:11:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.807 04:11:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.807 04:11:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.807 ************************************ 00:05:28.807 START TEST env_mem_callbacks 00:05:28.807 ************************************ 00:05:28.807 04:11:14 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.807 EAL: Detected CPU lcores: 10 00:05:28.807 EAL: Detected NUMA nodes: 1 00:05:28.807 EAL: Detected shared linkage of DPDK 00:05:28.807 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.807 EAL: Selected IOVA mode 'PA' 00:05:28.807 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.807 00:05:28.807 00:05:28.807 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.807 http://cunit.sourceforge.net/ 00:05:28.807 00:05:28.807 00:05:28.807 Suite: memory 00:05:28.807 Test: test ... 00:05:28.807 register 0x200000200000 2097152 00:05:28.807 malloc 3145728 00:05:28.807 register 0x200000400000 4194304 00:05:28.807 buf 0x200000500000 len 3145728 PASSED 00:05:28.807 malloc 64 00:05:28.807 buf 0x2000004fff40 len 64 PASSED 00:05:28.807 malloc 4194304 00:05:28.807 register 0x200000800000 6291456 00:05:28.807 buf 0x200000a00000 len 4194304 PASSED 00:05:28.807 free 0x200000500000 3145728 00:05:28.807 free 0x2000004fff40 64 00:05:28.807 unregister 0x200000400000 4194304 PASSED 00:05:28.807 free 0x200000a00000 4194304 00:05:28.807 unregister 0x200000800000 6291456 PASSED 00:05:28.807 malloc 8388608 00:05:28.807 register 0x200000400000 10485760 00:05:28.807 buf 0x200000600000 len 8388608 PASSED 00:05:28.807 free 0x200000600000 8388608 00:05:28.807 unregister 0x200000400000 10485760 PASSED 00:05:29.068 passed 00:05:29.068 00:05:29.068 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.068 suites 1 1 n/a 0 0 00:05:29.068 tests 1 1 1 0 0 00:05:29.068 asserts 15 15 15 0 n/a 00:05:29.068 00:05:29.068 Elapsed time = 0.009 seconds 00:05:29.068 00:05:29.068 real 0m0.179s 00:05:29.068 user 0m0.024s 00:05:29.068 sys 0m0.050s 00:05:29.068 04:11:14 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.068 04:11:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.068 ************************************ 00:05:29.068 END TEST env_mem_callbacks 00:05:29.068 ************************************ 00:05:29.068 ************************************ 00:05:29.068 END TEST env 00:05:29.068 ************************************ 00:05:29.068 00:05:29.068 real 0m2.407s 00:05:29.068 user 0m1.024s 00:05:29.068 sys 0m0.979s 00:05:29.068 04:11:14 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.068 04:11:14 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:29.068 04:11:14 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.068 04:11:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.068 04:11:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.068 04:11:14 -- common/autotest_common.sh@10 -- # set +x 00:05:29.068 ************************************ 00:05:29.068 START TEST rpc 00:05:29.068 ************************************ 00:05:29.068 04:11:14 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.068 * Looking for test storage... 00:05:29.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:29.068 04:11:14 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.068 04:11:14 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.068 04:11:14 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.068 04:11:14 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.068 04:11:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.068 04:11:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.068 04:11:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.068 04:11:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.068 04:11:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.068 04:11:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.068 04:11:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.068 04:11:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.068 04:11:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.068 04:11:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.068 04:11:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.068 04:11:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.068 04:11:14 rpc -- scripts/common.sh@345 -- # : 1 00:05:29.068 04:11:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.068 04:11:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.068 04:11:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.068 04:11:14 rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.068 04:11:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.068 04:11:14 rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.068 04:11:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.068 04:11:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.068 04:11:14 rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.329 04:11:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.329 04:11:14 rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.329 04:11:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.329 04:11:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.329 04:11:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.329 04:11:14 rpc -- scripts/common.sh@368 -- # return 0 00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.329 --rc genhtml_branch_coverage=1 00:05:29.329 --rc genhtml_function_coverage=1 00:05:29.329 --rc genhtml_legend=1 00:05:29.329 --rc geninfo_all_blocks=1 00:05:29.329 --rc geninfo_unexecuted_blocks=1 00:05:29.329 00:05:29.329 ' 00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.329 --rc genhtml_branch_coverage=1 00:05:29.329 --rc genhtml_function_coverage=1 00:05:29.329 --rc genhtml_legend=1 00:05:29.329 --rc geninfo_all_blocks=1 00:05:29.329 --rc geninfo_unexecuted_blocks=1 00:05:29.329 00:05:29.329 ' 00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.329 --rc genhtml_branch_coverage=1 00:05:29.329 --rc genhtml_function_coverage=1 00:05:29.329 --rc genhtml_legend=1 00:05:29.329 --rc geninfo_all_blocks=1 00:05:29.329 --rc geninfo_unexecuted_blocks=1 00:05:29.329 00:05:29.329 ' 00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.329 --rc genhtml_branch_coverage=1 00:05:29.329 --rc genhtml_function_coverage=1 00:05:29.329 --rc genhtml_legend=1 00:05:29.329 --rc geninfo_all_blocks=1 00:05:29.329 --rc geninfo_unexecuted_blocks=1 00:05:29.329 00:05:29.329 ' 00:05:29.329 04:11:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69382 00:05:29.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.329 04:11:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.329 04:11:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:29.329 04:11:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69382 00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@835 -- # '[' -z 69382 ']' 00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.329 04:11:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.329 [2024-11-17 04:11:14.872080] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:29.329 [2024-11-17 04:11:14.872194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69382 ] 00:05:29.329 [2024-11-17 04:11:15.027507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.329 [2024-11-17 04:11:15.046505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.329 [2024-11-17 04:11:15.046554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69382' to capture a snapshot of events at runtime. 00:05:29.329 [2024-11-17 04:11:15.046566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.329 [2024-11-17 04:11:15.046574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.329 [2024-11-17 04:11:15.046584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69382 for offline analysis/debug. 00:05:29.329 [2024-11-17 04:11:15.046893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.270 04:11:15 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.270 04:11:15 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.270 04:11:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.270 04:11:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.270 04:11:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.270 04:11:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.270 04:11:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.270 04:11:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.270 04:11:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.270 ************************************ 00:05:30.270 START TEST rpc_integrity 00:05:30.270 ************************************ 00:05:30.270 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:30.270 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.270 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.270 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.270 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.270 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.270 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.270 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.270 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.270 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.270 04:11:15 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.271 { 00:05:30.271 "name": "Malloc0", 00:05:30.271 "aliases": [ 00:05:30.271 "1499361e-8ef5-4aa4-a356-0697fc2ce055" 00:05:30.271 ], 00:05:30.271 "product_name": "Malloc disk", 00:05:30.271 "block_size": 512, 00:05:30.271 "num_blocks": 16384, 00:05:30.271 "uuid": "1499361e-8ef5-4aa4-a356-0697fc2ce055", 00:05:30.271 "assigned_rate_limits": { 00:05:30.271 "rw_ios_per_sec": 0, 00:05:30.271 "rw_mbytes_per_sec": 0, 00:05:30.271 "r_mbytes_per_sec": 0, 00:05:30.271 "w_mbytes_per_sec": 0 00:05:30.271 }, 00:05:30.271 "claimed": false, 00:05:30.271 "zoned": false, 00:05:30.271 "supported_io_types": { 00:05:30.271 "read": true, 00:05:30.271 "write": true, 00:05:30.271 "unmap": true, 00:05:30.271 "flush": true, 00:05:30.271 "reset": true, 00:05:30.271 "nvme_admin": false, 00:05:30.271 "nvme_io": false, 00:05:30.271 "nvme_io_md": false, 00:05:30.271 "write_zeroes": true, 00:05:30.271 "zcopy": true, 00:05:30.271 "get_zone_info": false, 00:05:30.271 "zone_management": false, 00:05:30.271 "zone_append": false, 00:05:30.271 "compare": false, 00:05:30.271 "compare_and_write": false, 00:05:30.271 "abort": true, 00:05:30.271 "seek_hole": false, 00:05:30.271 "seek_data": false, 00:05:30.271 "copy": true, 00:05:30.271 "nvme_iov_md": false 00:05:30.271 }, 00:05:30.271 "memory_domains": [ 00:05:30.271 { 00:05:30.271 "dma_device_id": "system", 00:05:30.271 "dma_device_type": 1 00:05:30.271 }, 00:05:30.271 { 00:05:30.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.271 "dma_device_type": 2 00:05:30.271 } 00:05:30.271 ], 00:05:30.271 "driver_specific": {} 00:05:30.271 } 00:05:30.271 ]' 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.271 [2024-11-17 04:11:15.845057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.271 [2024-11-17 04:11:15.845114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.271 [2024-11-17 04:11:15.845141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:30.271 [2024-11-17 04:11:15.845151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.271 [2024-11-17 04:11:15.847425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.271 [2024-11-17 04:11:15.847459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.271 Passthru0 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.271 
04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.271 { 00:05:30.271 "name": "Malloc0", 00:05:30.271 "aliases": [ 00:05:30.271 "1499361e-8ef5-4aa4-a356-0697fc2ce055" 00:05:30.271 ], 00:05:30.271 "product_name": "Malloc disk", 00:05:30.271 "block_size": 512, 00:05:30.271 "num_blocks": 16384, 00:05:30.271 "uuid": "1499361e-8ef5-4aa4-a356-0697fc2ce055", 00:05:30.271 "assigned_rate_limits": { 00:05:30.271 "rw_ios_per_sec": 0, 00:05:30.271 "rw_mbytes_per_sec": 0, 00:05:30.271 "r_mbytes_per_sec": 0, 00:05:30.271 "w_mbytes_per_sec": 0 00:05:30.271 }, 00:05:30.271 "claimed": true, 00:05:30.271 "claim_type": "exclusive_write", 00:05:30.271 "zoned": false, 00:05:30.271 "supported_io_types": { 00:05:30.271 "read": true, 00:05:30.271 "write": true, 00:05:30.271 "unmap": true, 00:05:30.271 "flush": true, 00:05:30.271 "reset": true, 00:05:30.271 "nvme_admin": false, 00:05:30.271 "nvme_io": false, 00:05:30.271 "nvme_io_md": false, 00:05:30.271 "write_zeroes": true, 00:05:30.271 "zcopy": true, 00:05:30.271 "get_zone_info": false, 00:05:30.271 "zone_management": false, 00:05:30.271 "zone_append": false, 00:05:30.271 "compare": false, 00:05:30.271 "compare_and_write": false, 00:05:30.271 "abort": true, 00:05:30.271 "seek_hole": false, 00:05:30.271 "seek_data": false, 00:05:30.271 "copy": true, 00:05:30.271 "nvme_iov_md": false 00:05:30.271 }, 00:05:30.271 "memory_domains": [ 00:05:30.271 { 00:05:30.271 "dma_device_id": "system", 00:05:30.271 "dma_device_type": 1 00:05:30.271 }, 00:05:30.271 { 00:05:30.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.271 "dma_device_type": 2 00:05:30.271 } 00:05:30.271 ], 00:05:30.271 "driver_specific": {} 00:05:30.271 }, 00:05:30.271 { 00:05:30.271 "name": "Passthru0", 00:05:30.271 "aliases": [ 00:05:30.271 "1c640c68-c318-5df2-965e-5129b14ead43" 00:05:30.271 ], 00:05:30.271 "product_name": "passthru", 00:05:30.271 "block_size": 512, 00:05:30.271 "num_blocks": 16384, 00:05:30.271 "uuid": "1c640c68-c318-5df2-965e-5129b14ead43", 00:05:30.271 "assigned_rate_limits": { 00:05:30.271 "rw_ios_per_sec": 0, 00:05:30.271 "rw_mbytes_per_sec": 0, 00:05:30.271 "r_mbytes_per_sec": 0, 00:05:30.271 "w_mbytes_per_sec": 0 00:05:30.271 }, 00:05:30.271 "claimed": false, 00:05:30.271 "zoned": false, 00:05:30.271 "supported_io_types": { 00:05:30.271 "read": true, 00:05:30.271 "write": true, 00:05:30.271 "unmap": true, 00:05:30.271 "flush": true, 00:05:30.271 "reset": true, 00:05:30.271 "nvme_admin": false, 00:05:30.271 "nvme_io": false, 00:05:30.271 "nvme_io_md": false, 00:05:30.271 "write_zeroes": true, 00:05:30.271 "zcopy": true, 00:05:30.271 "get_zone_info": false, 00:05:30.271 "zone_management": false, 00:05:30.271 "zone_append": false, 00:05:30.271 "compare": false, 00:05:30.271 "compare_and_write": false, 00:05:30.271 "abort": true, 00:05:30.271 "seek_hole": false, 00:05:30.271 "seek_data": false, 00:05:30.271 "copy": true, 00:05:30.271 "nvme_iov_md": false 00:05:30.271 }, 00:05:30.271 "memory_domains": [ 00:05:30.271 { 00:05:30.271 "dma_device_id": "system", 00:05:30.271 "dma_device_type": 1 00:05:30.271 }, 00:05:30.271 { 00:05:30.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.271 "dma_device_type": 2 
00:05:30.271 } 00:05:30.271 ], 00:05:30.271 "driver_specific": { 00:05:30.271 "passthru": { 00:05:30.271 "name": "Passthru0", 00:05:30.271 "base_bdev_name": "Malloc0" 00:05:30.271 } 00:05:30.271 } 00:05:30.271 } 00:05:30.271 ]' 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.271 ************************************ 00:05:30.271 END TEST rpc_integrity 00:05:30.271 ************************************ 00:05:30.271 04:11:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.271 00:05:30.271 real 0m0.224s 00:05:30.271 user 0m0.119s 00:05:30.271 sys 0m0.038s 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.271 04:11:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.271 04:11:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.271 04:11:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.271 04:11:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.271 04:11:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.533 ************************************ 00:05:30.533 START TEST rpc_plugins 00:05:30.533 ************************************ 00:05:30.533 04:11:15 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:30.533 04:11:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.533 04:11:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.533 04:11:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.533 04:11:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.533 04:11:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.533 { 00:05:30.533 "name": "Malloc1", 00:05:30.533 "aliases": 
[ 00:05:30.533 "7009d834-a56b-4b23-a66d-91423c143c8c" 00:05:30.533 ], 00:05:30.533 "product_name": "Malloc disk", 00:05:30.533 "block_size": 4096, 00:05:30.533 "num_blocks": 256, 00:05:30.533 "uuid": "7009d834-a56b-4b23-a66d-91423c143c8c", 00:05:30.533 "assigned_rate_limits": { 00:05:30.533 "rw_ios_per_sec": 0, 00:05:30.533 "rw_mbytes_per_sec": 0, 00:05:30.533 "r_mbytes_per_sec": 0, 00:05:30.533 "w_mbytes_per_sec": 0 00:05:30.533 }, 00:05:30.533 "claimed": false, 00:05:30.533 "zoned": false, 00:05:30.533 "supported_io_types": { 00:05:30.533 "read": true, 00:05:30.533 "write": true, 00:05:30.533 "unmap": true, 00:05:30.533 "flush": true, 00:05:30.533 "reset": true, 00:05:30.533 "nvme_admin": false, 00:05:30.533 "nvme_io": false, 00:05:30.533 "nvme_io_md": false, 00:05:30.533 "write_zeroes": true, 00:05:30.533 "zcopy": true, 00:05:30.533 "get_zone_info": false, 00:05:30.533 "zone_management": false, 00:05:30.533 "zone_append": false, 00:05:30.533 "compare": false, 00:05:30.533 "compare_and_write": false, 00:05:30.533 "abort": true, 00:05:30.533 "seek_hole": false, 00:05:30.533 "seek_data": false, 00:05:30.533 "copy": true, 00:05:30.533 "nvme_iov_md": false 00:05:30.533 }, 00:05:30.533 "memory_domains": [ 00:05:30.533 { 00:05:30.533 "dma_device_id": "system", 00:05:30.533 "dma_device_type": 1 00:05:30.533 }, 00:05:30.533 { 00:05:30.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.533 "dma_device_type": 2 00:05:30.533 } 00:05:30.533 ], 00:05:30.533 "driver_specific": {} 00:05:30.533 } 00:05:30.533 ]' 00:05:30.533 04:11:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:30.533 04:11:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:30.533 04:11:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.533 04:11:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.533 04:11:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:30.533 04:11:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:30.533 ************************************ 00:05:30.533 END TEST rpc_plugins 00:05:30.533 ************************************ 00:05:30.533 04:11:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:30.533 00:05:30.533 real 0m0.116s 00:05:30.533 user 0m0.065s 00:05:30.533 sys 0m0.013s 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.533 04:11:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.533 04:11:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:30.533 04:11:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.533 04:11:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.533 04:11:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.533 ************************************ 00:05:30.533 START TEST rpc_trace_cmd_test 00:05:30.533 ************************************ 00:05:30.533 04:11:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:30.533 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:30.533 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:30.533 04:11:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.533 04:11:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.533 04:11:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.533 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:30.533 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69382", 00:05:30.533 "tpoint_group_mask": "0x8", 00:05:30.533 "iscsi_conn": { 00:05:30.533 "mask": "0x2", 00:05:30.533 "tpoint_mask": "0x0" 00:05:30.533 }, 00:05:30.533 "scsi": { 00:05:30.533 "mask": "0x4", 00:05:30.533 "tpoint_mask": "0x0" 00:05:30.533 }, 00:05:30.533 "bdev": { 00:05:30.533 "mask": "0x8", 00:05:30.533 "tpoint_mask": "0xffffffffffffffff" 00:05:30.533 }, 00:05:30.533 "nvmf_rdma": { 00:05:30.533 "mask": "0x10", 00:05:30.533 "tpoint_mask": "0x0" 00:05:30.533 }, 00:05:30.533 "nvmf_tcp": { 00:05:30.533 "mask": "0x20", 00:05:30.533 "tpoint_mask": "0x0" 00:05:30.533 }, 00:05:30.533 "ftl": { 00:05:30.533 "mask": "0x40", 00:05:30.533 "tpoint_mask": "0x0" 00:05:30.533 }, 00:05:30.533 "blobfs": { 00:05:30.533 "mask": "0x80", 00:05:30.533 "tpoint_mask": "0x0" 00:05:30.533 }, 00:05:30.533 "dsa": { 00:05:30.533 "mask": "0x200", 00:05:30.533 "tpoint_mask": "0x0" 00:05:30.533 }, 00:05:30.533 "thread": { 00:05:30.533 "mask": "0x400", 00:05:30.533 "tpoint_mask": "0x0" 00:05:30.533 }, 00:05:30.533 "nvme_pcie": { 00:05:30.534 "mask": "0x800", 00:05:30.534 "tpoint_mask": "0x0" 00:05:30.534 }, 00:05:30.534 "iaa": { 00:05:30.534 "mask": "0x1000", 00:05:30.534 "tpoint_mask": "0x0" 00:05:30.534 }, 00:05:30.534 "nvme_tcp": { 00:05:30.534 "mask": "0x2000", 00:05:30.534 "tpoint_mask": "0x0" 00:05:30.534 }, 00:05:30.534 "bdev_nvme": { 00:05:30.534 "mask": "0x4000", 00:05:30.534 "tpoint_mask": "0x0" 00:05:30.534 }, 00:05:30.534 "sock": { 00:05:30.534 "mask": "0x8000", 00:05:30.534 "tpoint_mask": "0x0" 00:05:30.534 }, 00:05:30.534 "blob": { 00:05:30.534 "mask": "0x10000", 00:05:30.534 "tpoint_mask": "0x0" 00:05:30.534 }, 00:05:30.534 "bdev_raid": { 00:05:30.534 "mask": "0x20000", 00:05:30.534 "tpoint_mask": "0x0" 00:05:30.534 }, 00:05:30.534 "scheduler": { 00:05:30.534 "mask": "0x40000", 00:05:30.534 "tpoint_mask": "0x0" 00:05:30.534 } 00:05:30.534 }' 00:05:30.534 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.534 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:30.534 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.534 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.534 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.795 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.795 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.795 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.795 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.795 ************************************ 00:05:30.795 END TEST rpc_trace_cmd_test 00:05:30.795 ************************************ 00:05:30.795 04:11:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.795 00:05:30.795 real 0m0.180s 
00:05:30.795 user 0m0.135s 00:05:30.795 sys 0m0.030s 00:05:30.795 04:11:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.795 04:11:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.795 04:11:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.795 04:11:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.795 04:11:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.795 04:11:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.795 04:11:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.795 04:11:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.795 ************************************ 00:05:30.795 START TEST rpc_daemon_integrity 00:05:30.795 ************************************ 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.795 { 00:05:30.795 "name": "Malloc2", 00:05:30.795 "aliases": [ 00:05:30.795 "5b2504af-00a9-412b-aaf0-e1fee4fefc3b" 00:05:30.795 ], 00:05:30.795 "product_name": "Malloc disk", 00:05:30.795 "block_size": 512, 00:05:30.795 "num_blocks": 16384, 00:05:30.795 "uuid": "5b2504af-00a9-412b-aaf0-e1fee4fefc3b", 00:05:30.795 "assigned_rate_limits": { 00:05:30.795 "rw_ios_per_sec": 0, 00:05:30.795 "rw_mbytes_per_sec": 0, 00:05:30.795 "r_mbytes_per_sec": 0, 00:05:30.795 "w_mbytes_per_sec": 0 00:05:30.795 }, 00:05:30.795 "claimed": false, 00:05:30.795 "zoned": false, 00:05:30.795 "supported_io_types": { 00:05:30.795 "read": true, 00:05:30.795 "write": true, 00:05:30.795 "unmap": true, 00:05:30.795 "flush": true, 00:05:30.795 "reset": true, 00:05:30.795 "nvme_admin": false, 00:05:30.795 "nvme_io": false, 00:05:30.795 "nvme_io_md": false, 00:05:30.795 "write_zeroes": true, 00:05:30.795 "zcopy": true, 00:05:30.795 "get_zone_info": false, 00:05:30.795 "zone_management": false, 00:05:30.795 "zone_append": false, 00:05:30.795 "compare": false, 00:05:30.795 
"compare_and_write": false, 00:05:30.795 "abort": true, 00:05:30.795 "seek_hole": false, 00:05:30.795 "seek_data": false, 00:05:30.795 "copy": true, 00:05:30.795 "nvme_iov_md": false 00:05:30.795 }, 00:05:30.795 "memory_domains": [ 00:05:30.795 { 00:05:30.795 "dma_device_id": "system", 00:05:30.795 "dma_device_type": 1 00:05:30.795 }, 00:05:30.795 { 00:05:30.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.795 "dma_device_type": 2 00:05:30.795 } 00:05:30.795 ], 00:05:30.795 "driver_specific": {} 00:05:30.795 } 00:05:30.795 ]' 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.795 [2024-11-17 04:11:16.513516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:30.795 [2024-11-17 04:11:16.513567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.795 [2024-11-17 04:11:16.513594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:30.795 [2024-11-17 04:11:16.513603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.795 [2024-11-17 04:11:16.515791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.795 [2024-11-17 04:11:16.515825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.795 Passthru0 00:05:30.795 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.056 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.056 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.056 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.056 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.056 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.056 { 00:05:31.056 "name": "Malloc2", 00:05:31.056 "aliases": [ 00:05:31.056 "5b2504af-00a9-412b-aaf0-e1fee4fefc3b" 00:05:31.056 ], 00:05:31.056 "product_name": "Malloc disk", 00:05:31.056 "block_size": 512, 00:05:31.056 "num_blocks": 16384, 00:05:31.056 "uuid": "5b2504af-00a9-412b-aaf0-e1fee4fefc3b", 00:05:31.056 "assigned_rate_limits": { 00:05:31.056 "rw_ios_per_sec": 0, 00:05:31.056 "rw_mbytes_per_sec": 0, 00:05:31.056 "r_mbytes_per_sec": 0, 00:05:31.056 "w_mbytes_per_sec": 0 00:05:31.056 }, 00:05:31.056 "claimed": true, 00:05:31.056 "claim_type": "exclusive_write", 00:05:31.056 "zoned": false, 00:05:31.056 "supported_io_types": { 00:05:31.056 "read": true, 00:05:31.056 "write": true, 00:05:31.056 "unmap": true, 00:05:31.056 "flush": true, 00:05:31.056 "reset": true, 00:05:31.056 "nvme_admin": false, 00:05:31.056 "nvme_io": false, 00:05:31.056 "nvme_io_md": false, 00:05:31.056 "write_zeroes": true, 00:05:31.056 "zcopy": true, 00:05:31.056 "get_zone_info": false, 00:05:31.056 "zone_management": false, 00:05:31.056 "zone_append": false, 00:05:31.056 "compare": false, 00:05:31.056 "compare_and_write": false, 00:05:31.056 "abort": true, 00:05:31.056 "seek_hole": false, 00:05:31.056 "seek_data": false, 
00:05:31.056 "copy": true, 00:05:31.056 "nvme_iov_md": false 00:05:31.056 }, 00:05:31.057 "memory_domains": [ 00:05:31.057 { 00:05:31.057 "dma_device_id": "system", 00:05:31.057 "dma_device_type": 1 00:05:31.057 }, 00:05:31.057 { 00:05:31.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.057 "dma_device_type": 2 00:05:31.057 } 00:05:31.057 ], 00:05:31.057 "driver_specific": {} 00:05:31.057 }, 00:05:31.057 { 00:05:31.057 "name": "Passthru0", 00:05:31.057 "aliases": [ 00:05:31.057 "ed69dc8b-cc45-5c26-b69f-7513c6ce81f6" 00:05:31.057 ], 00:05:31.057 "product_name": "passthru", 00:05:31.057 "block_size": 512, 00:05:31.057 "num_blocks": 16384, 00:05:31.057 "uuid": "ed69dc8b-cc45-5c26-b69f-7513c6ce81f6", 00:05:31.057 "assigned_rate_limits": { 00:05:31.057 "rw_ios_per_sec": 0, 00:05:31.057 "rw_mbytes_per_sec": 0, 00:05:31.057 "r_mbytes_per_sec": 0, 00:05:31.057 "w_mbytes_per_sec": 0 00:05:31.057 }, 00:05:31.057 "claimed": false, 00:05:31.057 "zoned": false, 00:05:31.057 "supported_io_types": { 00:05:31.057 "read": true, 00:05:31.057 "write": true, 00:05:31.057 "unmap": true, 00:05:31.057 "flush": true, 00:05:31.057 "reset": true, 00:05:31.057 "nvme_admin": false, 00:05:31.057 "nvme_io": false, 00:05:31.057 "nvme_io_md": false, 00:05:31.057 "write_zeroes": true, 00:05:31.057 "zcopy": true, 00:05:31.057 "get_zone_info": false, 00:05:31.057 "zone_management": false, 00:05:31.057 "zone_append": false, 00:05:31.057 "compare": false, 00:05:31.057 "compare_and_write": false, 00:05:31.057 "abort": true, 00:05:31.057 "seek_hole": false, 00:05:31.057 "seek_data": false, 00:05:31.057 "copy": true, 00:05:31.057 "nvme_iov_md": false 00:05:31.057 }, 00:05:31.057 "memory_domains": [ 00:05:31.057 { 00:05:31.057 "dma_device_id": "system", 00:05:31.057 "dma_device_type": 1 00:05:31.057 }, 00:05:31.057 { 00:05:31.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.057 "dma_device_type": 2 00:05:31.057 } 00:05:31.057 ], 00:05:31.057 "driver_specific": { 00:05:31.057 "passthru": { 00:05:31.057 "name": "Passthru0", 00:05:31.057 "base_bdev_name": "Malloc2" 00:05:31.057 } 00:05:31.057 } 00:05:31.057 } 00:05:31.057 ]' 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
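For reference, the rpc_daemon_integrity block above boils down to a handful of rpc.py calls checked with jq; driven by hand against an already-running spdk_tgt, a rough equivalent sketch (the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, the Malloc2/Passthru0 names mirror this log) is:

    ./scripts/rpc.py bdev_malloc_create 8 512                        # create an 8 MiB malloc bdev with 512-byte blocks (prints its name, e.g. Malloc2)
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0    # layer a passthru vbdev on top of the malloc bdev
    ./scripts/rpc.py bdev_get_bdevs | jq length                      # expect 2: the malloc bdev plus the passthru bdev
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2
    ./scripts/rpc.py bdev_get_bdevs | jq length                      # expect 0 once both are torn down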
00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.057 ************************************ 00:05:31.057 END TEST rpc_daemon_integrity 00:05:31.057 ************************************ 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.057 00:05:31.057 real 0m0.235s 00:05:31.057 user 0m0.136s 00:05:31.057 sys 0m0.032s 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.057 04:11:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 04:11:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.057 04:11:16 rpc -- rpc/rpc.sh@84 -- # killprocess 69382 00:05:31.057 04:11:16 rpc -- common/autotest_common.sh@954 -- # '[' -z 69382 ']' 00:05:31.057 04:11:16 rpc -- common/autotest_common.sh@958 -- # kill -0 69382 00:05:31.057 04:11:16 rpc -- common/autotest_common.sh@959 -- # uname 00:05:31.057 04:11:16 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.057 04:11:16 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69382 00:05:31.057 killing process with pid 69382 00:05:31.057 04:11:16 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.057 04:11:16 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.057 04:11:16 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69382' 00:05:31.057 04:11:16 rpc -- common/autotest_common.sh@973 -- # kill 69382 00:05:31.057 04:11:16 rpc -- common/autotest_common.sh@978 -- # wait 69382 00:05:31.318 ************************************ 00:05:31.318 END TEST rpc 00:05:31.318 ************************************ 00:05:31.318 00:05:31.318 real 0m2.313s 00:05:31.318 user 0m2.763s 00:05:31.318 sys 0m0.587s 00:05:31.318 04:11:16 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.318 04:11:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.318 04:11:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:31.318 04:11:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.318 04:11:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.318 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:05:31.318 ************************************ 00:05:31.318 START TEST skip_rpc 00:05:31.318 ************************************ 00:05:31.318 04:11:17 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:31.580 * Looking for test storage... 
00:05:31.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.580 04:11:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.580 --rc genhtml_branch_coverage=1 00:05:31.580 --rc genhtml_function_coverage=1 00:05:31.580 --rc genhtml_legend=1 00:05:31.580 --rc geninfo_all_blocks=1 00:05:31.580 --rc geninfo_unexecuted_blocks=1 00:05:31.580 00:05:31.580 ' 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.580 --rc genhtml_branch_coverage=1 00:05:31.580 --rc genhtml_function_coverage=1 00:05:31.580 --rc genhtml_legend=1 00:05:31.580 --rc geninfo_all_blocks=1 00:05:31.580 --rc geninfo_unexecuted_blocks=1 00:05:31.580 00:05:31.580 ' 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:31.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.580 --rc genhtml_branch_coverage=1 00:05:31.580 --rc genhtml_function_coverage=1 00:05:31.580 --rc genhtml_legend=1 00:05:31.580 --rc geninfo_all_blocks=1 00:05:31.580 --rc geninfo_unexecuted_blocks=1 00:05:31.580 00:05:31.580 ' 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.580 --rc genhtml_branch_coverage=1 00:05:31.580 --rc genhtml_function_coverage=1 00:05:31.580 --rc genhtml_legend=1 00:05:31.580 --rc geninfo_all_blocks=1 00:05:31.580 --rc geninfo_unexecuted_blocks=1 00:05:31.580 00:05:31.580 ' 00:05:31.580 04:11:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:31.580 04:11:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.580 04:11:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.580 04:11:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.580 ************************************ 00:05:31.580 START TEST skip_rpc 00:05:31.580 ************************************ 00:05:31.580 04:11:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:31.580 04:11:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69584 00:05:31.580 04:11:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.580 04:11:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:31.580 04:11:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:31.580 [2024-11-17 04:11:17.258659] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:05:31.580 [2024-11-17 04:11:17.258772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69584 ] 00:05:31.843 [2024-11-17 04:11:17.421311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.843 [2024-11-17 04:11:17.441675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.195 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69584 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69584 ']' 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69584 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69584 00:05:37.196 killing process with pid 69584 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69584' 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69584 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69584 00:05:37.196 00:05:37.196 real 0m5.250s 00:05:37.196 user 0m4.892s 00:05:37.196 sys 0m0.255s 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.196 04:11:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.196 ************************************ 00:05:37.196 END TEST skip_rpc 00:05:37.196 
************************************ 00:05:37.196 04:11:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:37.196 04:11:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.196 04:11:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.196 04:11:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.196 ************************************ 00:05:37.196 START TEST skip_rpc_with_json 00:05:37.196 ************************************ 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:37.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69671 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69671 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69671 ']' 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.196 04:11:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.196 [2024-11-17 04:11:22.561539] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:05:37.196 [2024-11-17 04:11:22.561652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69671 ] 00:05:37.196 [2024-11-17 04:11:22.720052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.196 [2024-11-17 04:11:22.741439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.769 [2024-11-17 04:11:23.416127] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:37.769 request: 00:05:37.769 { 00:05:37.769 "trtype": "tcp", 00:05:37.769 "method": "nvmf_get_transports", 00:05:37.769 "req_id": 1 00:05:37.769 } 00:05:37.769 Got JSON-RPC error response 00:05:37.769 response: 00:05:37.769 { 00:05:37.769 "code": -19, 00:05:37.769 "message": "No such device" 00:05:37.769 } 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.769 [2024-11-17 04:11:23.424240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.769 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.031 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.031 04:11:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:38.031 { 00:05:38.031 "subsystems": [ 00:05:38.031 { 00:05:38.031 "subsystem": "fsdev", 00:05:38.031 "config": [ 00:05:38.031 { 00:05:38.031 "method": "fsdev_set_opts", 00:05:38.031 "params": { 00:05:38.031 "fsdev_io_pool_size": 65535, 00:05:38.031 "fsdev_io_cache_size": 256 00:05:38.031 } 00:05:38.031 } 00:05:38.031 ] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "keyring", 00:05:38.031 "config": [] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "iobuf", 00:05:38.031 "config": [ 00:05:38.031 { 00:05:38.031 "method": "iobuf_set_options", 00:05:38.031 "params": { 00:05:38.031 "small_pool_count": 8192, 00:05:38.031 "large_pool_count": 1024, 00:05:38.031 "small_bufsize": 8192, 00:05:38.031 "large_bufsize": 135168, 00:05:38.031 "enable_numa": false 00:05:38.031 } 00:05:38.031 } 00:05:38.031 ] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "sock", 00:05:38.031 "config": [ 00:05:38.031 { 
00:05:38.031 "method": "sock_set_default_impl", 00:05:38.031 "params": { 00:05:38.031 "impl_name": "posix" 00:05:38.031 } 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "method": "sock_impl_set_options", 00:05:38.031 "params": { 00:05:38.031 "impl_name": "ssl", 00:05:38.031 "recv_buf_size": 4096, 00:05:38.031 "send_buf_size": 4096, 00:05:38.031 "enable_recv_pipe": true, 00:05:38.031 "enable_quickack": false, 00:05:38.031 "enable_placement_id": 0, 00:05:38.031 "enable_zerocopy_send_server": true, 00:05:38.031 "enable_zerocopy_send_client": false, 00:05:38.031 "zerocopy_threshold": 0, 00:05:38.031 "tls_version": 0, 00:05:38.031 "enable_ktls": false 00:05:38.031 } 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "method": "sock_impl_set_options", 00:05:38.031 "params": { 00:05:38.031 "impl_name": "posix", 00:05:38.031 "recv_buf_size": 2097152, 00:05:38.031 "send_buf_size": 2097152, 00:05:38.031 "enable_recv_pipe": true, 00:05:38.031 "enable_quickack": false, 00:05:38.031 "enable_placement_id": 0, 00:05:38.031 "enable_zerocopy_send_server": true, 00:05:38.031 "enable_zerocopy_send_client": false, 00:05:38.031 "zerocopy_threshold": 0, 00:05:38.031 "tls_version": 0, 00:05:38.031 "enable_ktls": false 00:05:38.031 } 00:05:38.031 } 00:05:38.031 ] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "vmd", 00:05:38.031 "config": [] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "accel", 00:05:38.031 "config": [ 00:05:38.031 { 00:05:38.031 "method": "accel_set_options", 00:05:38.031 "params": { 00:05:38.031 "small_cache_size": 128, 00:05:38.031 "large_cache_size": 16, 00:05:38.031 "task_count": 2048, 00:05:38.031 "sequence_count": 2048, 00:05:38.031 "buf_count": 2048 00:05:38.031 } 00:05:38.031 } 00:05:38.031 ] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "bdev", 00:05:38.031 "config": [ 00:05:38.031 { 00:05:38.031 "method": "bdev_set_options", 00:05:38.031 "params": { 00:05:38.031 "bdev_io_pool_size": 65535, 00:05:38.031 "bdev_io_cache_size": 256, 00:05:38.031 "bdev_auto_examine": true, 00:05:38.031 "iobuf_small_cache_size": 128, 00:05:38.031 "iobuf_large_cache_size": 16 00:05:38.031 } 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "method": "bdev_raid_set_options", 00:05:38.031 "params": { 00:05:38.031 "process_window_size_kb": 1024, 00:05:38.031 "process_max_bandwidth_mb_sec": 0 00:05:38.031 } 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "method": "bdev_iscsi_set_options", 00:05:38.031 "params": { 00:05:38.031 "timeout_sec": 30 00:05:38.031 } 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "method": "bdev_nvme_set_options", 00:05:38.031 "params": { 00:05:38.031 "action_on_timeout": "none", 00:05:38.031 "timeout_us": 0, 00:05:38.031 "timeout_admin_us": 0, 00:05:38.031 "keep_alive_timeout_ms": 10000, 00:05:38.031 "arbitration_burst": 0, 00:05:38.031 "low_priority_weight": 0, 00:05:38.031 "medium_priority_weight": 0, 00:05:38.031 "high_priority_weight": 0, 00:05:38.031 "nvme_adminq_poll_period_us": 10000, 00:05:38.031 "nvme_ioq_poll_period_us": 0, 00:05:38.031 "io_queue_requests": 0, 00:05:38.031 "delay_cmd_submit": true, 00:05:38.031 "transport_retry_count": 4, 00:05:38.031 "bdev_retry_count": 3, 00:05:38.031 "transport_ack_timeout": 0, 00:05:38.031 "ctrlr_loss_timeout_sec": 0, 00:05:38.031 "reconnect_delay_sec": 0, 00:05:38.031 "fast_io_fail_timeout_sec": 0, 00:05:38.031 "disable_auto_failback": false, 00:05:38.031 "generate_uuids": false, 00:05:38.031 "transport_tos": 0, 00:05:38.031 "nvme_error_stat": false, 00:05:38.031 "rdma_srq_size": 0, 00:05:38.031 "io_path_stat": false, 
00:05:38.031 "allow_accel_sequence": false, 00:05:38.031 "rdma_max_cq_size": 0, 00:05:38.031 "rdma_cm_event_timeout_ms": 0, 00:05:38.031 "dhchap_digests": [ 00:05:38.031 "sha256", 00:05:38.031 "sha384", 00:05:38.031 "sha512" 00:05:38.031 ], 00:05:38.031 "dhchap_dhgroups": [ 00:05:38.031 "null", 00:05:38.031 "ffdhe2048", 00:05:38.031 "ffdhe3072", 00:05:38.031 "ffdhe4096", 00:05:38.031 "ffdhe6144", 00:05:38.031 "ffdhe8192" 00:05:38.031 ] 00:05:38.031 } 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "method": "bdev_nvme_set_hotplug", 00:05:38.031 "params": { 00:05:38.031 "period_us": 100000, 00:05:38.031 "enable": false 00:05:38.031 } 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "method": "bdev_wait_for_examine" 00:05:38.031 } 00:05:38.031 ] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "scsi", 00:05:38.031 "config": null 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "scheduler", 00:05:38.031 "config": [ 00:05:38.031 { 00:05:38.031 "method": "framework_set_scheduler", 00:05:38.031 "params": { 00:05:38.031 "name": "static" 00:05:38.031 } 00:05:38.031 } 00:05:38.031 ] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "vhost_scsi", 00:05:38.031 "config": [] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "vhost_blk", 00:05:38.031 "config": [] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "ublk", 00:05:38.031 "config": [] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "nbd", 00:05:38.031 "config": [] 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "subsystem": "nvmf", 00:05:38.031 "config": [ 00:05:38.031 { 00:05:38.031 "method": "nvmf_set_config", 00:05:38.031 "params": { 00:05:38.031 "discovery_filter": "match_any", 00:05:38.031 "admin_cmd_passthru": { 00:05:38.031 "identify_ctrlr": false 00:05:38.031 }, 00:05:38.031 "dhchap_digests": [ 00:05:38.031 "sha256", 00:05:38.031 "sha384", 00:05:38.031 "sha512" 00:05:38.031 ], 00:05:38.031 "dhchap_dhgroups": [ 00:05:38.031 "null", 00:05:38.031 "ffdhe2048", 00:05:38.031 "ffdhe3072", 00:05:38.031 "ffdhe4096", 00:05:38.031 "ffdhe6144", 00:05:38.031 "ffdhe8192" 00:05:38.031 ] 00:05:38.031 } 00:05:38.031 }, 00:05:38.031 { 00:05:38.031 "method": "nvmf_set_max_subsystems", 00:05:38.031 "params": { 00:05:38.031 "max_subsystems": 1024 00:05:38.031 } 00:05:38.032 }, 00:05:38.032 { 00:05:38.032 "method": "nvmf_set_crdt", 00:05:38.032 "params": { 00:05:38.032 "crdt1": 0, 00:05:38.032 "crdt2": 0, 00:05:38.032 "crdt3": 0 00:05:38.032 } 00:05:38.032 }, 00:05:38.032 { 00:05:38.032 "method": "nvmf_create_transport", 00:05:38.032 "params": { 00:05:38.032 "trtype": "TCP", 00:05:38.032 "max_queue_depth": 128, 00:05:38.032 "max_io_qpairs_per_ctrlr": 127, 00:05:38.032 "in_capsule_data_size": 4096, 00:05:38.032 "max_io_size": 131072, 00:05:38.032 "io_unit_size": 131072, 00:05:38.032 "max_aq_depth": 128, 00:05:38.032 "num_shared_buffers": 511, 00:05:38.032 "buf_cache_size": 4294967295, 00:05:38.032 "dif_insert_or_strip": false, 00:05:38.032 "zcopy": false, 00:05:38.032 "c2h_success": true, 00:05:38.032 "sock_priority": 0, 00:05:38.032 "abort_timeout_sec": 1, 00:05:38.032 "ack_timeout": 0, 00:05:38.032 "data_wr_pool_size": 0 00:05:38.032 } 00:05:38.032 } 00:05:38.032 ] 00:05:38.032 }, 00:05:38.032 { 00:05:38.032 "subsystem": "iscsi", 00:05:38.032 "config": [ 00:05:38.032 { 00:05:38.032 "method": "iscsi_set_options", 00:05:38.032 "params": { 00:05:38.032 "node_base": "iqn.2016-06.io.spdk", 00:05:38.032 "max_sessions": 128, 00:05:38.032 "max_connections_per_session": 2, 00:05:38.032 "max_queue_depth": 64, 00:05:38.032 
"default_time2wait": 2, 00:05:38.032 "default_time2retain": 20, 00:05:38.032 "first_burst_length": 8192, 00:05:38.032 "immediate_data": true, 00:05:38.032 "allow_duplicated_isid": false, 00:05:38.032 "error_recovery_level": 0, 00:05:38.032 "nop_timeout": 60, 00:05:38.032 "nop_in_interval": 30, 00:05:38.032 "disable_chap": false, 00:05:38.032 "require_chap": false, 00:05:38.032 "mutual_chap": false, 00:05:38.032 "chap_group": 0, 00:05:38.032 "max_large_datain_per_connection": 64, 00:05:38.032 "max_r2t_per_connection": 4, 00:05:38.032 "pdu_pool_size": 36864, 00:05:38.032 "immediate_data_pool_size": 16384, 00:05:38.032 "data_out_pool_size": 2048 00:05:38.032 } 00:05:38.032 } 00:05:38.032 ] 00:05:38.032 } 00:05:38.032 ] 00:05:38.032 } 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69671 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69671 ']' 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69671 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69671 00:05:38.032 killing process with pid 69671 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69671' 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69671 00:05:38.032 04:11:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69671 00:05:38.293 04:11:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69694 00:05:38.293 04:11:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:38.293 04:11:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69694 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69694 ']' 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69694 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69694 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69694' 00:05:43.574 killing process with pid 69694 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69694 00:05:43.574 04:11:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69694 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:43.574 00:05:43.574 real 0m6.667s 00:05:43.574 user 0m6.215s 00:05:43.574 sys 0m0.700s 00:05:43.574 ************************************ 00:05:43.574 END TEST skip_rpc_with_json 00:05:43.574 ************************************ 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.574 04:11:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:43.574 04:11:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.574 04:11:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.574 04:11:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.574 ************************************ 00:05:43.574 START TEST skip_rpc_with_delay 00:05:43.574 ************************************ 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:43.574 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.574 [2024-11-17 04:11:29.260835] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
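For reference, the skip_rpc_with_delay case above hinges on spdk_tgt rejecting --wait-for-rpc when the RPC server is disabled; reproduced by hand (binary path taken from this run, exact wording may differ across SPDK revisions) it is simply:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # expected: "Cannot use '--wait-for-rpc' if no RPC server is going to be started." followed by a non-zero exit status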
00:05:43.836 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:43.836 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.836 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.836 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.836 00:05:43.836 real 0m0.123s 00:05:43.836 user 0m0.068s 00:05:43.836 sys 0m0.054s 00:05:43.836 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.836 04:11:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:43.836 ************************************ 00:05:43.836 END TEST skip_rpc_with_delay 00:05:43.836 ************************************ 00:05:43.836 04:11:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:43.836 04:11:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:43.836 04:11:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:43.836 04:11:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.836 04:11:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.836 04:11:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.836 ************************************ 00:05:43.836 START TEST exit_on_failed_rpc_init 00:05:43.836 ************************************ 00:05:43.836 04:11:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:43.836 04:11:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69806 00:05:43.836 04:11:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69806 00:05:43.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.836 04:11:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69806 ']' 00:05:43.836 04:11:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.836 04:11:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.836 04:11:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.836 04:11:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.836 04:11:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.836 04:11:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.836 [2024-11-17 04:11:29.414358] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:05:43.836 [2024-11-17 04:11:29.414461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69806 ] 00:05:43.836 [2024-11-17 04:11:29.552577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.094 [2024-11-17 04:11:29.569126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.659 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:44.660 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.660 [2024-11-17 04:11:30.333955] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:44.660 [2024-11-17 04:11:30.334069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69824 ] 00:05:44.919 [2024-11-17 04:11:30.483060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.919 [2024-11-17 04:11:30.501208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.919 [2024-11-17 04:11:30.501294] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
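For reference, exit_on_failed_rpc_init above is a two-process check: a second spdk_tgt must fail because the first already owns the RPC socket. A hedged by-hand equivalent (default /var/tmp/spdk.sock socket assumed) is:

    ./build/bin/spdk_tgt -m 0x1 &    # first instance binds /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2      # second instance logs "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another." and stops with a non-zero status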
00:05:44.919 [2024-11-17 04:11:30.501308] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:44.919 [2024-11-17 04:11:30.501319] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69806 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69806 ']' 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69806 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69806 00:05:44.919 killing process with pid 69806 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69806' 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69806 00:05:44.919 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69806 00:05:45.178 00:05:45.178 real 0m1.463s 00:05:45.178 user 0m1.631s 00:05:45.178 sys 0m0.343s 00:05:45.178 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.178 ************************************ 00:05:45.178 END TEST exit_on_failed_rpc_init 00:05:45.178 ************************************ 00:05:45.178 04:11:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.178 04:11:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:45.178 ************************************ 00:05:45.178 END TEST skip_rpc 00:05:45.178 ************************************ 00:05:45.178 00:05:45.178 real 0m13.831s 00:05:45.178 user 0m12.967s 00:05:45.178 sys 0m1.498s 00:05:45.178 04:11:30 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.178 04:11:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.178 04:11:30 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.178 04:11:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.178 04:11:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.178 04:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:45.178 
************************************ 00:05:45.178 START TEST rpc_client 00:05:45.178 ************************************ 00:05:45.178 04:11:30 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.436 * Looking for test storage... 00:05:45.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:45.436 04:11:30 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.436 04:11:30 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.436 04:11:30 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.436 04:11:31 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.436 04:11:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:45.436 04:11:31 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.436 04:11:31 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.436 --rc genhtml_branch_coverage=1 00:05:45.436 --rc genhtml_function_coverage=1 00:05:45.436 --rc genhtml_legend=1 00:05:45.436 --rc geninfo_all_blocks=1 00:05:45.436 --rc geninfo_unexecuted_blocks=1 00:05:45.436 00:05:45.436 ' 00:05:45.436 04:11:31 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.436 --rc genhtml_branch_coverage=1 00:05:45.436 --rc genhtml_function_coverage=1 00:05:45.436 --rc genhtml_legend=1 00:05:45.436 --rc geninfo_all_blocks=1 00:05:45.436 --rc geninfo_unexecuted_blocks=1 00:05:45.436 00:05:45.436 ' 00:05:45.436 04:11:31 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.436 --rc genhtml_branch_coverage=1 00:05:45.436 --rc genhtml_function_coverage=1 00:05:45.436 --rc genhtml_legend=1 00:05:45.436 --rc geninfo_all_blocks=1 00:05:45.436 --rc geninfo_unexecuted_blocks=1 00:05:45.436 00:05:45.436 ' 00:05:45.436 04:11:31 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.436 --rc genhtml_branch_coverage=1 00:05:45.436 --rc genhtml_function_coverage=1 00:05:45.436 --rc genhtml_legend=1 00:05:45.436 --rc geninfo_all_blocks=1 00:05:45.436 --rc geninfo_unexecuted_blocks=1 00:05:45.436 00:05:45.436 ' 00:05:45.436 04:11:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:45.436 OK 00:05:45.436 04:11:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.436 00:05:45.436 real 0m0.171s 00:05:45.436 user 0m0.097s 00:05:45.436 sys 0m0.082s 00:05:45.436 04:11:31 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.436 04:11:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:45.436 ************************************ 00:05:45.436 END TEST rpc_client 00:05:45.436 ************************************ 00:05:45.436 04:11:31 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.436 04:11:31 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.436 04:11:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.436 04:11:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.436 ************************************ 00:05:45.436 START TEST json_config 00:05:45.436 ************************************ 00:05:45.436 04:11:31 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.436 04:11:31 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.436 04:11:31 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.436 04:11:31 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.696 04:11:31 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.696 04:11:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.696 04:11:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.696 04:11:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.696 04:11:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.696 04:11:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.696 04:11:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.696 04:11:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.696 04:11:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.696 04:11:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.696 04:11:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.696 04:11:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.696 04:11:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:45.696 04:11:31 json_config -- scripts/common.sh@345 -- # : 1 00:05:45.696 04:11:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.696 04:11:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.696 04:11:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:45.696 04:11:31 json_config -- scripts/common.sh@353 -- # local d=1 00:05:45.696 04:11:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.696 04:11:31 json_config -- scripts/common.sh@355 -- # echo 1 00:05:45.696 04:11:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.696 04:11:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:45.696 04:11:31 json_config -- scripts/common.sh@353 -- # local d=2 00:05:45.696 04:11:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.696 04:11:31 json_config -- scripts/common.sh@355 -- # echo 2 00:05:45.696 04:11:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.696 04:11:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.696 04:11:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.696 04:11:31 json_config -- scripts/common.sh@368 -- # return 0 00:05:45.696 04:11:31 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.696 04:11:31 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.696 --rc genhtml_branch_coverage=1 00:05:45.696 --rc genhtml_function_coverage=1 00:05:45.696 --rc genhtml_legend=1 00:05:45.696 --rc geninfo_all_blocks=1 00:05:45.696 --rc geninfo_unexecuted_blocks=1 00:05:45.696 00:05:45.696 ' 00:05:45.696 04:11:31 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.696 --rc genhtml_branch_coverage=1 00:05:45.696 --rc genhtml_function_coverage=1 00:05:45.696 --rc genhtml_legend=1 00:05:45.696 --rc geninfo_all_blocks=1 00:05:45.696 --rc geninfo_unexecuted_blocks=1 00:05:45.696 00:05:45.696 ' 00:05:45.696 04:11:31 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.696 --rc genhtml_branch_coverage=1 00:05:45.696 --rc genhtml_function_coverage=1 00:05:45.696 --rc genhtml_legend=1 00:05:45.696 --rc geninfo_all_blocks=1 00:05:45.696 --rc geninfo_unexecuted_blocks=1 00:05:45.696 00:05:45.696 ' 00:05:45.696 04:11:31 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.696 --rc genhtml_branch_coverage=1 00:05:45.696 --rc genhtml_function_coverage=1 00:05:45.696 --rc genhtml_legend=1 00:05:45.696 --rc geninfo_all_blocks=1 00:05:45.696 --rc geninfo_unexecuted_blocks=1 00:05:45.696 00:05:45.696 ' 00:05:45.696 04:11:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.696 04:11:31 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:707b909b-d9e3-4a2c-b9ec-709ea86a88f1 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=707b909b-d9e3-4a2c-b9ec-709ea86a88f1 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.696 04:11:31 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.696 04:11:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.696 04:11:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.696 04:11:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.697 04:11:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.697 04:11:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.697 04:11:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.697 04:11:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.697 04:11:31 json_config -- paths/export.sh@5 -- # export PATH 00:05:45.697 04:11:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.697 04:11:31 json_config -- nvmf/common.sh@51 -- # : 0 00:05:45.697 04:11:31 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.697 04:11:31 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.697 04:11:31 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.697 04:11:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.697 04:11:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.697 04:11:31 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.697 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.697 04:11:31 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.697 04:11:31 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.697 04:11:31 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.697 04:11:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:45.697 04:11:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:45.697 04:11:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:45.697 04:11:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:45.697 04:11:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:45.697 WARNING: No tests are enabled so not running JSON configuration tests 00:05:45.697 04:11:31 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:45.697 04:11:31 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:45.697 00:05:45.697 real 0m0.126s 00:05:45.697 user 0m0.080s 00:05:45.697 sys 0m0.050s 00:05:45.697 04:11:31 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.697 04:11:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.697 ************************************ 00:05:45.697 END TEST json_config 00:05:45.697 ************************************ 00:05:45.697 04:11:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:45.697 04:11:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.697 04:11:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.697 04:11:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.697 ************************************ 00:05:45.697 START TEST json_config_extra_key 00:05:45.697 ************************************ 00:05:45.697 04:11:31 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:45.697 04:11:31 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.697 04:11:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.697 04:11:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.697 04:11:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.697 04:11:31 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:45.697 04:11:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.697 04:11:31 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.697 --rc genhtml_branch_coverage=1 00:05:45.697 --rc genhtml_function_coverage=1 00:05:45.697 --rc genhtml_legend=1 00:05:45.697 --rc geninfo_all_blocks=1 00:05:45.697 --rc geninfo_unexecuted_blocks=1 00:05:45.697 00:05:45.697 ' 00:05:45.697 04:11:31 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.697 --rc genhtml_branch_coverage=1 00:05:45.697 --rc genhtml_function_coverage=1 00:05:45.697 --rc genhtml_legend=1 00:05:45.697 --rc geninfo_all_blocks=1 00:05:45.697 --rc geninfo_unexecuted_blocks=1 00:05:45.697 00:05:45.697 ' 00:05:45.697 04:11:31 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.697 --rc genhtml_branch_coverage=1 00:05:45.697 --rc genhtml_function_coverage=1 00:05:45.697 --rc genhtml_legend=1 00:05:45.697 --rc geninfo_all_blocks=1 00:05:45.697 --rc geninfo_unexecuted_blocks=1 00:05:45.697 00:05:45.697 ' 00:05:45.697 04:11:31 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.697 --rc genhtml_branch_coverage=1 00:05:45.697 --rc 
genhtml_function_coverage=1 00:05:45.697 --rc genhtml_legend=1 00:05:45.697 --rc geninfo_all_blocks=1 00:05:45.697 --rc geninfo_unexecuted_blocks=1 00:05:45.697 00:05:45.697 ' 00:05:45.697 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:707b909b-d9e3-4a2c-b9ec-709ea86a88f1 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=707b909b-d9e3-4a2c-b9ec-709ea86a88f1 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.697 04:11:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.697 04:11:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.698 04:11:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.698 04:11:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.698 04:11:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.698 04:11:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.698 04:11:31 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.698 04:11:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:45.698 04:11:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.698 04:11:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:45.698 04:11:31 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.698 04:11:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.698 04:11:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.698 04:11:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.698 04:11:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.698 04:11:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.698 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.698 04:11:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.698 04:11:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.698 04:11:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:45.698 INFO: launching applications... 
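The json_config_extra_key harness tracks each application it manages in a set of bash associative arrays keyed by app name, as the trace above shows for the single 'target' app (app_pid, app_socket, app_params, configs_path). A minimal sketch of that bookkeeping pattern, with the values taken from the trace:

    # Per-app bookkeeping as used by test/json_config/common.sh (sketch only).
    declare -A app_pid=(['target']='')                     # filled in once spdk_tgt has started
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR     # tear the tracked apps down on any failure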
00:05:45.698 04:11:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70001 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:45.698 Waiting for target to run... 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70001 /var/tmp/spdk_tgt.sock 00:05:45.698 04:11:31 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70001 ']' 00:05:45.698 04:11:31 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.698 04:11:31 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.698 04:11:31 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.698 04:11:31 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.698 04:11:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.698 04:11:31 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:45.957 [2024-11-17 04:11:31.470862] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:45.957 [2024-11-17 04:11:31.471325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70001 ] 00:05:46.215 [2024-11-17 04:11:31.774319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.215 [2024-11-17 04:11:31.785283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.781 00:05:46.781 INFO: shutting down applications... 00:05:46.781 04:11:32 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.781 04:11:32 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:46.781 04:11:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:46.781 04:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
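The launch traced above reduces to: start spdk_tgt with the app's parameters, RPC socket and JSON config, record its PID, then wait for the RPC socket to answer. A rough approximation of that flow (the real waitforlisten helper in autotest_common.sh does more; this is only a sketch):

    app=target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!

    # Poll the RPC socket until the target responds (stand-in for waitforlisten).
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "${app_socket[$app]}" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done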
00:05:46.781 04:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:46.781 04:11:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:46.781 04:11:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:46.781 04:11:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70001 ]] 00:05:46.781 04:11:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70001 00:05:46.781 04:11:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:46.781 04:11:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.782 04:11:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70001 00:05:46.782 04:11:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.348 04:11:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.348 04:11:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.348 04:11:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70001 00:05:47.348 04:11:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:47.348 04:11:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:47.348 04:11:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:47.348 04:11:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:47.348 SPDK target shutdown done 00:05:47.348 04:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:47.348 Success 00:05:47.348 00:05:47.348 real 0m1.554s 00:05:47.348 user 0m1.271s 00:05:47.348 sys 0m0.318s 00:05:47.348 04:11:32 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.348 04:11:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.348 ************************************ 00:05:47.348 END TEST json_config_extra_key 00:05:47.348 ************************************ 00:05:47.348 04:11:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.348 04:11:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.348 04:11:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.348 04:11:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.348 ************************************ 00:05:47.348 START TEST alias_rpc 00:05:47.348 ************************************ 00:05:47.348 04:11:32 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.348 * Looking for test storage... 
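The shutdown traced above sends SIGINT to the target and then polls for up to 30 half-second intervals until the PID is gone. Condensed:

    pid=${app_pid[$app]}
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break    # kill -0 only tests whether the process still exists
        sleep 0.5
    done
    app_pid[$app]=
    echo 'SPDK target shutdown done'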
00:05:47.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:47.348 04:11:32 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.348 04:11:32 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.348 04:11:32 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.348 04:11:32 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.348 04:11:32 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:47.349 04:11:32 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.349 04:11:32 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.349 --rc genhtml_branch_coverage=1 00:05:47.349 --rc genhtml_function_coverage=1 00:05:47.349 --rc genhtml_legend=1 00:05:47.349 --rc geninfo_all_blocks=1 00:05:47.349 --rc geninfo_unexecuted_blocks=1 00:05:47.349 00:05:47.349 ' 00:05:47.349 04:11:32 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.349 --rc genhtml_branch_coverage=1 00:05:47.349 --rc genhtml_function_coverage=1 00:05:47.349 --rc genhtml_legend=1 00:05:47.349 --rc geninfo_all_blocks=1 00:05:47.349 --rc geninfo_unexecuted_blocks=1 00:05:47.349 00:05:47.349 ' 00:05:47.349 04:11:32 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.349 --rc genhtml_branch_coverage=1 00:05:47.349 --rc genhtml_function_coverage=1 00:05:47.349 --rc genhtml_legend=1 00:05:47.349 --rc geninfo_all_blocks=1 00:05:47.349 --rc geninfo_unexecuted_blocks=1 00:05:47.349 00:05:47.349 ' 00:05:47.349 04:11:32 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.349 --rc genhtml_branch_coverage=1 00:05:47.349 --rc genhtml_function_coverage=1 00:05:47.349 --rc genhtml_legend=1 00:05:47.349 --rc geninfo_all_blocks=1 00:05:47.349 --rc geninfo_unexecuted_blocks=1 00:05:47.349 00:05:47.349 ' 00:05:47.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.349 04:11:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:47.349 04:11:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70074 00:05:47.349 04:11:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70074 00:05:47.349 04:11:32 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70074 ']' 00:05:47.349 04:11:32 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.349 04:11:32 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.349 04:11:32 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.349 04:11:32 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.349 04:11:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.349 04:11:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.349 [2024-11-17 04:11:33.059612] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
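alias_rpc.sh arms an ERR trap before it starts the target, so any failing command in the test body kills the spdk_tgt it spawned instead of leaking it (trap 'killprocess $spdk_tgt_pid; exit 1' ERR in the trace above). The pattern, sketched:

    trap 'killprocess $spdk_tgt_pid; exit 1' ERR
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
    # ... test body: any non-zero exit status fires the trap and tears the target down ...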
00:05:47.349 [2024-11-17 04:11:33.059737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70074 ] 00:05:47.609 [2024-11-17 04:11:33.213747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.609 [2024-11-17 04:11:33.232253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.177 04:11:33 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.177 04:11:33 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.177 04:11:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:48.435 04:11:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70074 00:05:48.435 04:11:34 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70074 ']' 00:05:48.435 04:11:34 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70074 00:05:48.435 04:11:34 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:48.435 04:11:34 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.435 04:11:34 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70074 00:05:48.435 killing process with pid 70074 00:05:48.435 04:11:34 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.435 04:11:34 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.435 04:11:34 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70074' 00:05:48.435 04:11:34 alias_rpc -- common/autotest_common.sh@973 -- # kill 70074 00:05:48.435 04:11:34 alias_rpc -- common/autotest_common.sh@978 -- # wait 70074 00:05:48.694 ************************************ 00:05:48.694 END TEST alias_rpc 00:05:48.694 ************************************ 00:05:48.694 00:05:48.694 real 0m1.534s 00:05:48.694 user 0m1.688s 00:05:48.694 sys 0m0.355s 00:05:48.694 04:11:34 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.694 04:11:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.694 04:11:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:48.694 04:11:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:48.694 04:11:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.694 04:11:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.694 04:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:48.952 ************************************ 00:05:48.952 START TEST spdkcli_tcp 00:05:48.952 ************************************ 00:05:48.952 04:11:34 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:48.952 * Looking for test storage... 
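The killprocess helper traced above (pid 70074) checks that the process is still alive, inspects its name so it never signals a sudo wrapper directly, then kills and reaps it. A condensed sketch of that logic:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                          # still running?
        local process_name=unknown
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 for spdk_tgt
        fi
        # The real helper special-cases process_name = sudo; that branch is skipped in this sketch.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }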
00:05:48.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:48.952 04:11:34 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.952 04:11:34 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.952 04:11:34 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.952 04:11:34 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:48.952 04:11:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.953 04:11:34 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.953 --rc genhtml_branch_coverage=1 00:05:48.953 --rc genhtml_function_coverage=1 00:05:48.953 --rc genhtml_legend=1 00:05:48.953 --rc geninfo_all_blocks=1 00:05:48.953 --rc geninfo_unexecuted_blocks=1 00:05:48.953 00:05:48.953 ' 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.953 --rc genhtml_branch_coverage=1 00:05:48.953 --rc genhtml_function_coverage=1 00:05:48.953 --rc genhtml_legend=1 00:05:48.953 --rc geninfo_all_blocks=1 00:05:48.953 --rc geninfo_unexecuted_blocks=1 00:05:48.953 
00:05:48.953 ' 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.953 --rc genhtml_branch_coverage=1 00:05:48.953 --rc genhtml_function_coverage=1 00:05:48.953 --rc genhtml_legend=1 00:05:48.953 --rc geninfo_all_blocks=1 00:05:48.953 --rc geninfo_unexecuted_blocks=1 00:05:48.953 00:05:48.953 ' 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.953 --rc genhtml_branch_coverage=1 00:05:48.953 --rc genhtml_function_coverage=1 00:05:48.953 --rc genhtml_legend=1 00:05:48.953 --rc geninfo_all_blocks=1 00:05:48.953 --rc geninfo_unexecuted_blocks=1 00:05:48.953 00:05:48.953 ' 00:05:48.953 04:11:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:48.953 04:11:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:48.953 04:11:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:48.953 04:11:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:48.953 04:11:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:48.953 04:11:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:48.953 04:11:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.953 04:11:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70154 00:05:48.953 04:11:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70154 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70154 ']' 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.953 04:11:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.953 04:11:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.953 [2024-11-17 04:11:34.644094] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
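spdkcli_tcp exercises the RPC server over TCP rather than over the UNIX socket. As the next trace lines show, it does so by pointing socat at the target's /var/tmp/spdk.sock and then driving rpc.py against 127.0.0.1:9998:

    # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Issue an RPC over TCP: up to 100 connection retries, 2 s timeout per attempt.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods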
00:05:48.953 [2024-11-17 04:11:34.644355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70154 ] 00:05:49.211 [2024-11-17 04:11:34.802581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.211 [2024-11-17 04:11:34.821980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.211 [2024-11-17 04:11:34.822086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.778 04:11:35 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.778 04:11:35 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:49.778 04:11:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70171 00:05:49.778 04:11:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:49.778 04:11:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:50.036 [ 00:05:50.036 "bdev_malloc_delete", 00:05:50.036 "bdev_malloc_create", 00:05:50.036 "bdev_null_resize", 00:05:50.036 "bdev_null_delete", 00:05:50.036 "bdev_null_create", 00:05:50.036 "bdev_nvme_cuse_unregister", 00:05:50.036 "bdev_nvme_cuse_register", 00:05:50.036 "bdev_opal_new_user", 00:05:50.036 "bdev_opal_set_lock_state", 00:05:50.036 "bdev_opal_delete", 00:05:50.036 "bdev_opal_get_info", 00:05:50.036 "bdev_opal_create", 00:05:50.036 "bdev_nvme_opal_revert", 00:05:50.036 "bdev_nvme_opal_init", 00:05:50.036 "bdev_nvme_send_cmd", 00:05:50.036 "bdev_nvme_set_keys", 00:05:50.036 "bdev_nvme_get_path_iostat", 00:05:50.036 "bdev_nvme_get_mdns_discovery_info", 00:05:50.036 "bdev_nvme_stop_mdns_discovery", 00:05:50.036 "bdev_nvme_start_mdns_discovery", 00:05:50.036 "bdev_nvme_set_multipath_policy", 00:05:50.036 "bdev_nvme_set_preferred_path", 00:05:50.036 "bdev_nvme_get_io_paths", 00:05:50.036 "bdev_nvme_remove_error_injection", 00:05:50.036 "bdev_nvme_add_error_injection", 00:05:50.036 "bdev_nvme_get_discovery_info", 00:05:50.036 "bdev_nvme_stop_discovery", 00:05:50.036 "bdev_nvme_start_discovery", 00:05:50.036 "bdev_nvme_get_controller_health_info", 00:05:50.036 "bdev_nvme_disable_controller", 00:05:50.036 "bdev_nvme_enable_controller", 00:05:50.036 "bdev_nvme_reset_controller", 00:05:50.036 "bdev_nvme_get_transport_statistics", 00:05:50.036 "bdev_nvme_apply_firmware", 00:05:50.036 "bdev_nvme_detach_controller", 00:05:50.036 "bdev_nvme_get_controllers", 00:05:50.036 "bdev_nvme_attach_controller", 00:05:50.036 "bdev_nvme_set_hotplug", 00:05:50.036 "bdev_nvme_set_options", 00:05:50.036 "bdev_passthru_delete", 00:05:50.036 "bdev_passthru_create", 00:05:50.036 "bdev_lvol_set_parent_bdev", 00:05:50.036 "bdev_lvol_set_parent", 00:05:50.036 "bdev_lvol_check_shallow_copy", 00:05:50.036 "bdev_lvol_start_shallow_copy", 00:05:50.037 "bdev_lvol_grow_lvstore", 00:05:50.037 "bdev_lvol_get_lvols", 00:05:50.037 "bdev_lvol_get_lvstores", 00:05:50.037 "bdev_lvol_delete", 00:05:50.037 "bdev_lvol_set_read_only", 00:05:50.037 "bdev_lvol_resize", 00:05:50.037 "bdev_lvol_decouple_parent", 00:05:50.037 "bdev_lvol_inflate", 00:05:50.037 "bdev_lvol_rename", 00:05:50.037 "bdev_lvol_clone_bdev", 00:05:50.037 "bdev_lvol_clone", 00:05:50.037 "bdev_lvol_snapshot", 00:05:50.037 "bdev_lvol_create", 00:05:50.037 "bdev_lvol_delete_lvstore", 00:05:50.037 "bdev_lvol_rename_lvstore", 00:05:50.037 
"bdev_lvol_create_lvstore", 00:05:50.037 "bdev_raid_set_options", 00:05:50.037 "bdev_raid_remove_base_bdev", 00:05:50.037 "bdev_raid_add_base_bdev", 00:05:50.037 "bdev_raid_delete", 00:05:50.037 "bdev_raid_create", 00:05:50.037 "bdev_raid_get_bdevs", 00:05:50.037 "bdev_error_inject_error", 00:05:50.037 "bdev_error_delete", 00:05:50.037 "bdev_error_create", 00:05:50.037 "bdev_split_delete", 00:05:50.037 "bdev_split_create", 00:05:50.037 "bdev_delay_delete", 00:05:50.037 "bdev_delay_create", 00:05:50.037 "bdev_delay_update_latency", 00:05:50.037 "bdev_zone_block_delete", 00:05:50.037 "bdev_zone_block_create", 00:05:50.037 "blobfs_create", 00:05:50.037 "blobfs_detect", 00:05:50.037 "blobfs_set_cache_size", 00:05:50.037 "bdev_xnvme_delete", 00:05:50.037 "bdev_xnvme_create", 00:05:50.037 "bdev_aio_delete", 00:05:50.037 "bdev_aio_rescan", 00:05:50.037 "bdev_aio_create", 00:05:50.037 "bdev_ftl_set_property", 00:05:50.037 "bdev_ftl_get_properties", 00:05:50.037 "bdev_ftl_get_stats", 00:05:50.037 "bdev_ftl_unmap", 00:05:50.037 "bdev_ftl_unload", 00:05:50.037 "bdev_ftl_delete", 00:05:50.037 "bdev_ftl_load", 00:05:50.037 "bdev_ftl_create", 00:05:50.037 "bdev_virtio_attach_controller", 00:05:50.037 "bdev_virtio_scsi_get_devices", 00:05:50.037 "bdev_virtio_detach_controller", 00:05:50.037 "bdev_virtio_blk_set_hotplug", 00:05:50.037 "bdev_iscsi_delete", 00:05:50.037 "bdev_iscsi_create", 00:05:50.037 "bdev_iscsi_set_options", 00:05:50.037 "accel_error_inject_error", 00:05:50.037 "ioat_scan_accel_module", 00:05:50.037 "dsa_scan_accel_module", 00:05:50.037 "iaa_scan_accel_module", 00:05:50.037 "keyring_file_remove_key", 00:05:50.037 "keyring_file_add_key", 00:05:50.037 "keyring_linux_set_options", 00:05:50.037 "fsdev_aio_delete", 00:05:50.037 "fsdev_aio_create", 00:05:50.037 "iscsi_get_histogram", 00:05:50.037 "iscsi_enable_histogram", 00:05:50.037 "iscsi_set_options", 00:05:50.037 "iscsi_get_auth_groups", 00:05:50.037 "iscsi_auth_group_remove_secret", 00:05:50.037 "iscsi_auth_group_add_secret", 00:05:50.037 "iscsi_delete_auth_group", 00:05:50.037 "iscsi_create_auth_group", 00:05:50.037 "iscsi_set_discovery_auth", 00:05:50.037 "iscsi_get_options", 00:05:50.037 "iscsi_target_node_request_logout", 00:05:50.037 "iscsi_target_node_set_redirect", 00:05:50.037 "iscsi_target_node_set_auth", 00:05:50.037 "iscsi_target_node_add_lun", 00:05:50.037 "iscsi_get_stats", 00:05:50.037 "iscsi_get_connections", 00:05:50.037 "iscsi_portal_group_set_auth", 00:05:50.037 "iscsi_start_portal_group", 00:05:50.037 "iscsi_delete_portal_group", 00:05:50.037 "iscsi_create_portal_group", 00:05:50.037 "iscsi_get_portal_groups", 00:05:50.037 "iscsi_delete_target_node", 00:05:50.037 "iscsi_target_node_remove_pg_ig_maps", 00:05:50.037 "iscsi_target_node_add_pg_ig_maps", 00:05:50.037 "iscsi_create_target_node", 00:05:50.037 "iscsi_get_target_nodes", 00:05:50.037 "iscsi_delete_initiator_group", 00:05:50.037 "iscsi_initiator_group_remove_initiators", 00:05:50.037 "iscsi_initiator_group_add_initiators", 00:05:50.037 "iscsi_create_initiator_group", 00:05:50.037 "iscsi_get_initiator_groups", 00:05:50.037 "nvmf_set_crdt", 00:05:50.037 "nvmf_set_config", 00:05:50.037 "nvmf_set_max_subsystems", 00:05:50.037 "nvmf_stop_mdns_prr", 00:05:50.037 "nvmf_publish_mdns_prr", 00:05:50.037 "nvmf_subsystem_get_listeners", 00:05:50.037 "nvmf_subsystem_get_qpairs", 00:05:50.037 "nvmf_subsystem_get_controllers", 00:05:50.037 "nvmf_get_stats", 00:05:50.037 "nvmf_get_transports", 00:05:50.037 "nvmf_create_transport", 00:05:50.037 "nvmf_get_targets", 00:05:50.037 
"nvmf_delete_target", 00:05:50.037 "nvmf_create_target", 00:05:50.037 "nvmf_subsystem_allow_any_host", 00:05:50.037 "nvmf_subsystem_set_keys", 00:05:50.037 "nvmf_subsystem_remove_host", 00:05:50.037 "nvmf_subsystem_add_host", 00:05:50.037 "nvmf_ns_remove_host", 00:05:50.037 "nvmf_ns_add_host", 00:05:50.037 "nvmf_subsystem_remove_ns", 00:05:50.037 "nvmf_subsystem_set_ns_ana_group", 00:05:50.037 "nvmf_subsystem_add_ns", 00:05:50.037 "nvmf_subsystem_listener_set_ana_state", 00:05:50.037 "nvmf_discovery_get_referrals", 00:05:50.037 "nvmf_discovery_remove_referral", 00:05:50.037 "nvmf_discovery_add_referral", 00:05:50.037 "nvmf_subsystem_remove_listener", 00:05:50.037 "nvmf_subsystem_add_listener", 00:05:50.037 "nvmf_delete_subsystem", 00:05:50.037 "nvmf_create_subsystem", 00:05:50.037 "nvmf_get_subsystems", 00:05:50.037 "env_dpdk_get_mem_stats", 00:05:50.037 "nbd_get_disks", 00:05:50.037 "nbd_stop_disk", 00:05:50.037 "nbd_start_disk", 00:05:50.037 "ublk_recover_disk", 00:05:50.037 "ublk_get_disks", 00:05:50.037 "ublk_stop_disk", 00:05:50.037 "ublk_start_disk", 00:05:50.037 "ublk_destroy_target", 00:05:50.037 "ublk_create_target", 00:05:50.037 "virtio_blk_create_transport", 00:05:50.037 "virtio_blk_get_transports", 00:05:50.037 "vhost_controller_set_coalescing", 00:05:50.037 "vhost_get_controllers", 00:05:50.037 "vhost_delete_controller", 00:05:50.037 "vhost_create_blk_controller", 00:05:50.037 "vhost_scsi_controller_remove_target", 00:05:50.037 "vhost_scsi_controller_add_target", 00:05:50.037 "vhost_start_scsi_controller", 00:05:50.037 "vhost_create_scsi_controller", 00:05:50.037 "thread_set_cpumask", 00:05:50.037 "scheduler_set_options", 00:05:50.037 "framework_get_governor", 00:05:50.037 "framework_get_scheduler", 00:05:50.037 "framework_set_scheduler", 00:05:50.037 "framework_get_reactors", 00:05:50.037 "thread_get_io_channels", 00:05:50.037 "thread_get_pollers", 00:05:50.037 "thread_get_stats", 00:05:50.037 "framework_monitor_context_switch", 00:05:50.037 "spdk_kill_instance", 00:05:50.037 "log_enable_timestamps", 00:05:50.037 "log_get_flags", 00:05:50.037 "log_clear_flag", 00:05:50.037 "log_set_flag", 00:05:50.037 "log_get_level", 00:05:50.037 "log_set_level", 00:05:50.037 "log_get_print_level", 00:05:50.037 "log_set_print_level", 00:05:50.037 "framework_enable_cpumask_locks", 00:05:50.037 "framework_disable_cpumask_locks", 00:05:50.037 "framework_wait_init", 00:05:50.037 "framework_start_init", 00:05:50.037 "scsi_get_devices", 00:05:50.037 "bdev_get_histogram", 00:05:50.037 "bdev_enable_histogram", 00:05:50.037 "bdev_set_qos_limit", 00:05:50.037 "bdev_set_qd_sampling_period", 00:05:50.037 "bdev_get_bdevs", 00:05:50.037 "bdev_reset_iostat", 00:05:50.037 "bdev_get_iostat", 00:05:50.037 "bdev_examine", 00:05:50.037 "bdev_wait_for_examine", 00:05:50.037 "bdev_set_options", 00:05:50.037 "accel_get_stats", 00:05:50.037 "accel_set_options", 00:05:50.037 "accel_set_driver", 00:05:50.037 "accel_crypto_key_destroy", 00:05:50.037 "accel_crypto_keys_get", 00:05:50.037 "accel_crypto_key_create", 00:05:50.037 "accel_assign_opc", 00:05:50.037 "accel_get_module_info", 00:05:50.037 "accel_get_opc_assignments", 00:05:50.037 "vmd_rescan", 00:05:50.037 "vmd_remove_device", 00:05:50.037 "vmd_enable", 00:05:50.037 "sock_get_default_impl", 00:05:50.037 "sock_set_default_impl", 00:05:50.037 "sock_impl_set_options", 00:05:50.037 "sock_impl_get_options", 00:05:50.037 "iobuf_get_stats", 00:05:50.037 "iobuf_set_options", 00:05:50.037 "keyring_get_keys", 00:05:50.037 "framework_get_pci_devices", 00:05:50.037 
"framework_get_config", 00:05:50.037 "framework_get_subsystems", 00:05:50.037 "fsdev_set_opts", 00:05:50.037 "fsdev_get_opts", 00:05:50.037 "trace_get_info", 00:05:50.037 "trace_get_tpoint_group_mask", 00:05:50.037 "trace_disable_tpoint_group", 00:05:50.037 "trace_enable_tpoint_group", 00:05:50.037 "trace_clear_tpoint_mask", 00:05:50.037 "trace_set_tpoint_mask", 00:05:50.037 "notify_get_notifications", 00:05:50.037 "notify_get_types", 00:05:50.037 "spdk_get_version", 00:05:50.037 "rpc_get_methods" 00:05:50.037 ] 00:05:50.037 04:11:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:50.037 04:11:35 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.037 04:11:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.037 04:11:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:50.037 04:11:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70154 00:05:50.037 04:11:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70154 ']' 00:05:50.037 04:11:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70154 00:05:50.037 04:11:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:50.037 04:11:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.038 04:11:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70154 00:05:50.038 killing process with pid 70154 00:05:50.038 04:11:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.038 04:11:35 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.038 04:11:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70154' 00:05:50.038 04:11:35 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70154 00:05:50.038 04:11:35 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70154 00:05:50.296 ************************************ 00:05:50.296 END TEST spdkcli_tcp 00:05:50.296 ************************************ 00:05:50.296 00:05:50.296 real 0m1.515s 00:05:50.296 user 0m2.657s 00:05:50.296 sys 0m0.388s 00:05:50.296 04:11:35 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.296 04:11:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.296 04:11:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.296 04:11:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.296 04:11:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.296 04:11:35 -- common/autotest_common.sh@10 -- # set +x 00:05:50.296 ************************************ 00:05:50.296 START TEST dpdk_mem_utility 00:05:50.296 ************************************ 00:05:50.296 04:11:35 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.554 * Looking for test storage... 
00:05:50.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:50.554 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.554 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.554 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.554 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:50.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.554 04:11:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:50.554 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.554 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.555 --rc genhtml_branch_coverage=1 00:05:50.555 --rc genhtml_function_coverage=1 00:05:50.555 --rc genhtml_legend=1 00:05:50.555 --rc geninfo_all_blocks=1 00:05:50.555 --rc geninfo_unexecuted_blocks=1 00:05:50.555 00:05:50.555 ' 00:05:50.555 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.555 --rc genhtml_branch_coverage=1 00:05:50.555 --rc genhtml_function_coverage=1 00:05:50.555 --rc genhtml_legend=1 00:05:50.555 --rc geninfo_all_blocks=1 00:05:50.555 --rc geninfo_unexecuted_blocks=1 00:05:50.555 00:05:50.555 ' 00:05:50.555 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.555 --rc genhtml_branch_coverage=1 00:05:50.555 --rc genhtml_function_coverage=1 00:05:50.555 --rc genhtml_legend=1 00:05:50.555 --rc geninfo_all_blocks=1 00:05:50.555 --rc geninfo_unexecuted_blocks=1 00:05:50.555 00:05:50.555 ' 00:05:50.555 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.555 --rc genhtml_branch_coverage=1 00:05:50.555 --rc genhtml_function_coverage=1 00:05:50.555 --rc genhtml_legend=1 00:05:50.555 --rc geninfo_all_blocks=1 00:05:50.555 --rc geninfo_unexecuted_blocks=1 00:05:50.555 00:05:50.555 ' 00:05:50.555 04:11:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:50.555 04:11:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70248 00:05:50.555 04:11:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70248 00:05:50.555 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70248 ']' 00:05:50.555 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.555 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.555 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.555 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.555 04:11:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:50.555 04:11:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.555 [2024-11-17 04:11:36.179670] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
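The dpdk_mem_utility run that follows asks the live target to dump its DPDK memory state and then post-processes the dump: the env_dpdk_get_mem_stats RPC writes /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py summarizes it. Sketched from the commands in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    mem_script=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    "$rpc" env_dpdk_get_mem_stats    # replies with {"filename": "/tmp/spdk_mem_dump.txt"}
    "$mem_script"                    # summary: heaps, mempools, memzones
    "$mem_script" -m 0               # detailed dump; the heap-0 element listing below comes from this call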
00:05:50.555 [2024-11-17 04:11:36.179782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70248 ] 00:05:50.814 [2024-11-17 04:11:36.332894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.814 [2024-11-17 04:11:36.350885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.380 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.380 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:51.380 04:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:51.380 04:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:51.380 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.380 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.380 { 00:05:51.380 "filename": "/tmp/spdk_mem_dump.txt" 00:05:51.380 } 00:05:51.380 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.380 04:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:51.380 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:51.380 1 heaps totaling size 810.000000 MiB 00:05:51.380 size: 810.000000 MiB heap id: 0 00:05:51.380 end heaps---------- 00:05:51.380 9 mempools totaling size 595.772034 MiB 00:05:51.380 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:51.380 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:51.380 size: 92.545471 MiB name: bdev_io_70248 00:05:51.380 size: 50.003479 MiB name: msgpool_70248 00:05:51.380 size: 36.509338 MiB name: fsdev_io_70248 00:05:51.380 size: 21.763794 MiB name: PDU_Pool 00:05:51.380 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:51.380 size: 4.133484 MiB name: evtpool_70248 00:05:51.380 size: 0.026123 MiB name: Session_Pool 00:05:51.380 end mempools------- 00:05:51.380 6 memzones totaling size 4.142822 MiB 00:05:51.380 size: 1.000366 MiB name: RG_ring_0_70248 00:05:51.380 size: 1.000366 MiB name: RG_ring_1_70248 00:05:51.380 size: 1.000366 MiB name: RG_ring_4_70248 00:05:51.380 size: 1.000366 MiB name: RG_ring_5_70248 00:05:51.380 size: 0.125366 MiB name: RG_ring_2_70248 00:05:51.380 size: 0.015991 MiB name: RG_ring_3_70248 00:05:51.380 end memzones------- 00:05:51.380 04:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:51.639 heap id: 0 total size: 810.000000 MiB number of busy elements: 308 number of free elements: 15 00:05:51.639 list of free elements. 
size: 10.814148 MiB 00:05:51.639 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:51.639 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:51.639 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:51.639 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:51.639 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:51.639 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:51.639 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:51.639 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:51.639 element at address: 0x20001a600000 with size: 0.567871 MiB 00:05:51.639 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:51.639 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:51.640 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:51.640 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:51.640 element at address: 0x200027a00000 with size: 0.396484 MiB 00:05:51.640 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:51.640 list of standard malloc elements. size: 199.266968 MiB 00:05:51.640 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:51.640 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:51.640 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:51.640 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:51.640 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:51.640 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:51.640 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:51.640 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:51.640 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:51.640 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:51.640 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:51.640 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:51.640 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:51.640 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:51.640 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691600 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693040 with size: 0.000183 MiB 
00:05:51.641 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:51.641 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a65800 with size: 0.000183 MiB 00:05:51.641 element at 
address: 0x200027a658c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6c4c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:51.641 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6ea00 
with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:51.642 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:51.642 list of memzone associated elements. 
size: 599.918884 MiB 00:05:51.642 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:51.642 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:51.642 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:51.642 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:51.642 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:51.642 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70248_0 00:05:51.642 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:51.642 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70248_0 00:05:51.642 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:51.642 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70248_0 00:05:51.642 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:51.642 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:51.642 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:51.642 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:51.642 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:51.642 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70248_0 00:05:51.642 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:51.642 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70248 00:05:51.642 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:51.642 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70248 00:05:51.642 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:51.642 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:51.642 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:51.642 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:51.642 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:51.642 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:51.642 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:51.642 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:51.642 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:51.642 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70248 00:05:51.642 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:51.642 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70248 00:05:51.642 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:51.642 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70248 00:05:51.642 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:51.642 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70248 00:05:51.642 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:51.642 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70248 00:05:51.642 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:51.642 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70248 00:05:51.642 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:51.642 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:51.642 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:51.642 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:51.642 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:51.642 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:51.642 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:51.642 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70248 00:05:51.642 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:51.642 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70248 00:05:51.642 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:51.642 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:51.642 element at address: 0x200027a65980 with size: 0.023743 MiB 00:05:51.642 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:51.642 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:51.642 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70248 00:05:51.642 element at address: 0x200027a6bac0 with size: 0.002441 MiB 00:05:51.642 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:51.642 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:51.642 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70248 00:05:51.642 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:51.642 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70248 00:05:51.642 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:51.642 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70248 00:05:51.642 element at address: 0x200027a6c580 with size: 0.000305 MiB 00:05:51.642 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:51.642 04:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:51.642 04:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70248 00:05:51.642 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70248 ']' 00:05:51.642 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70248 00:05:51.642 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:51.642 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.642 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70248 00:05:51.642 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.642 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.642 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70248' 00:05:51.642 killing process with pid 70248 00:05:51.642 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70248 00:05:51.642 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70248 00:05:51.904 00:05:51.904 real 0m1.415s 00:05:51.904 user 0m1.494s 00:05:51.904 sys 0m0.333s 00:05:51.904 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.904 04:11:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.904 ************************************ 00:05:51.904 END TEST dpdk_mem_utility 00:05:51.904 ************************************ 00:05:51.904 04:11:37 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:51.904 04:11:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.904 04:11:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.904 04:11:37 -- common/autotest_common.sh@10 -- # set +x 
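The dpdk_mem_utility run traced above reduces to a short command sequence. The sketch below is a reconstruction from the log only, not the actual test script: paths, the spdk_tgt binary, the dpdk_mem_info.py flags, and the /tmp/spdk_mem_dump.txt filename are copied from the trace, while the use of scripts/rpc.py in place of the rpc_cmd wrapper and the sleep in place of waitforlisten are assumptions for illustration.

```bash
#!/usr/bin/env bash
# Minimal sketch of the dpdk_mem_utility flow as traced in the log above.
# Assumptions: rpc_cmd is equivalent to calling scripts/rpc.py, and the
# waitforlisten/killprocess helpers from autotest_common.sh are replaced
# by a sleep and a plain kill here.
set -e

SPDK_REPO=/home/vagrant/spdk_repo/spdk
MEM_SCRIPT=$SPDK_REPO/scripts/dpdk_mem_info.py

# Start the SPDK target (the trace shows it coming up as pid 70248).
"$SPDK_REPO/build/bin/spdk_tgt" &
spdkpid=$!

# Wait for the app to listen on /var/tmp/spdk.sock (waitforlisten in the log).
sleep 1

# Ask the target to dump its DPDK memory stats; the RPC reports the dump
# file as /tmp/spdk_mem_dump.txt, as seen in the trace.
"$SPDK_REPO/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize the dump: the plain invocation prints the heap/mempool/memzone
# totals, and "-m 0" prints the per-element listing for heap 0.
"$MEM_SCRIPT"
"$MEM_SCRIPT" -m 0

kill "$spdkpid"
```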
00:05:51.904 ************************************ 00:05:51.904 START TEST event 00:05:51.904 ************************************ 00:05:51.904 04:11:37 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:51.904 * Looking for test storage... 00:05:51.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.904 04:11:37 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.905 04:11:37 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.905 04:11:37 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.905 04:11:37 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.905 04:11:37 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.905 04:11:37 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.905 04:11:37 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.905 04:11:37 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.905 04:11:37 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.905 04:11:37 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.905 04:11:37 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.905 04:11:37 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.905 04:11:37 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.905 04:11:37 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.905 04:11:37 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.905 04:11:37 event -- scripts/common.sh@344 -- # case "$op" in 00:05:51.905 04:11:37 event -- scripts/common.sh@345 -- # : 1 00:05:51.905 04:11:37 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.905 04:11:37 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.905 04:11:37 event -- scripts/common.sh@365 -- # decimal 1 00:05:51.905 04:11:37 event -- scripts/common.sh@353 -- # local d=1 00:05:51.905 04:11:37 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.905 04:11:37 event -- scripts/common.sh@355 -- # echo 1 00:05:51.905 04:11:37 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.905 04:11:37 event -- scripts/common.sh@366 -- # decimal 2 00:05:51.905 04:11:37 event -- scripts/common.sh@353 -- # local d=2 00:05:51.905 04:11:37 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.905 04:11:37 event -- scripts/common.sh@355 -- # echo 2 00:05:51.905 04:11:37 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.905 04:11:37 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.905 04:11:37 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.906 04:11:37 event -- scripts/common.sh@368 -- # return 0 00:05:51.906 04:11:37 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.906 04:11:37 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.906 --rc genhtml_branch_coverage=1 00:05:51.906 --rc genhtml_function_coverage=1 00:05:51.906 --rc genhtml_legend=1 00:05:51.906 --rc geninfo_all_blocks=1 00:05:51.906 --rc geninfo_unexecuted_blocks=1 00:05:51.906 00:05:51.906 ' 00:05:51.906 04:11:37 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.906 --rc genhtml_branch_coverage=1 00:05:51.906 --rc genhtml_function_coverage=1 00:05:51.906 --rc genhtml_legend=1 00:05:51.906 --rc 
geninfo_all_blocks=1 00:05:51.906 --rc geninfo_unexecuted_blocks=1 00:05:51.906 00:05:51.906 ' 00:05:51.906 04:11:37 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.906 --rc genhtml_branch_coverage=1 00:05:51.906 --rc genhtml_function_coverage=1 00:05:51.906 --rc genhtml_legend=1 00:05:51.906 --rc geninfo_all_blocks=1 00:05:51.906 --rc geninfo_unexecuted_blocks=1 00:05:51.906 00:05:51.906 ' 00:05:51.906 04:11:37 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.906 --rc genhtml_branch_coverage=1 00:05:51.906 --rc genhtml_function_coverage=1 00:05:51.906 --rc genhtml_legend=1 00:05:51.906 --rc geninfo_all_blocks=1 00:05:51.906 --rc geninfo_unexecuted_blocks=1 00:05:51.906 00:05:51.906 ' 00:05:51.906 04:11:37 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:51.906 04:11:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:51.906 04:11:37 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.907 04:11:37 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:51.907 04:11:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.907 04:11:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.907 ************************************ 00:05:51.907 START TEST event_perf 00:05:51.907 ************************************ 00:05:51.907 04:11:37 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.907 Running I/O for 1 seconds...[2024-11-17 04:11:37.612360] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:51.907 [2024-11-17 04:11:37.612556] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70329 ] 00:05:52.173 [2024-11-17 04:11:37.769875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.173 [2024-11-17 04:11:37.790470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.173 [2024-11-17 04:11:37.790647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.173 [2024-11-17 04:11:37.791046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.173 Running I/O for 1 seconds...[2024-11-17 04:11:37.791106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.117 00:05:53.117 lcore 0: 196042 00:05:53.117 lcore 1: 196043 00:05:53.117 lcore 2: 196045 00:05:53.117 lcore 3: 196044 00:05:53.117 done. 
00:05:53.117 00:05:53.117 real 0m1.246s 00:05:53.117 user 0m4.066s 00:05:53.117 sys 0m0.064s 00:05:53.117 04:11:38 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.117 04:11:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.117 ************************************ 00:05:53.117 END TEST event_perf 00:05:53.117 ************************************ 00:05:53.375 04:11:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:53.375 04:11:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:53.375 04:11:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.375 04:11:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.375 ************************************ 00:05:53.375 START TEST event_reactor 00:05:53.375 ************************************ 00:05:53.375 04:11:38 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:53.375 [2024-11-17 04:11:38.897413] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:53.375 [2024-11-17 04:11:38.897524] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70363 ] 00:05:53.375 [2024-11-17 04:11:39.053716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.375 [2024-11-17 04:11:39.072694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.751 test_start 00:05:54.751 oneshot 00:05:54.751 tick 100 00:05:54.751 tick 100 00:05:54.751 tick 250 00:05:54.751 tick 100 00:05:54.751 tick 100 00:05:54.751 tick 250 00:05:54.751 tick 100 00:05:54.751 tick 500 00:05:54.751 tick 100 00:05:54.751 tick 100 00:05:54.751 tick 250 00:05:54.751 tick 100 00:05:54.751 tick 100 00:05:54.751 test_end 00:05:54.751 00:05:54.751 real 0m1.243s 00:05:54.751 user 0m1.081s 00:05:54.751 sys 0m0.054s 00:05:54.751 04:11:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.751 ************************************ 00:05:54.751 END TEST event_reactor 00:05:54.751 ************************************ 00:05:54.751 04:11:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:54.751 04:11:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.751 04:11:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:54.751 04:11:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.751 04:11:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.751 ************************************ 00:05:54.751 START TEST event_reactor_perf 00:05:54.751 ************************************ 00:05:54.751 04:11:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.751 [2024-11-17 04:11:40.195279] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:05:54.751 [2024-11-17 04:11:40.195590] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70399 ] 00:05:54.751 [2024-11-17 04:11:40.351257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.751 [2024-11-17 04:11:40.370191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.689 test_start 00:05:55.689 test_end 00:05:55.689 Performance: 318117 events per second 00:05:55.689 00:05:55.689 real 0m1.238s 00:05:55.689 user 0m1.078s 00:05:55.689 sys 0m0.054s 00:05:55.689 04:11:41 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.689 ************************************ 00:05:55.689 END TEST event_reactor_perf 00:05:55.689 ************************************ 00:05:55.689 04:11:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.947 04:11:41 event -- event/event.sh@49 -- # uname -s 00:05:55.947 04:11:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:55.947 04:11:41 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:55.947 04:11:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.947 04:11:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.947 04:11:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.947 ************************************ 00:05:55.947 START TEST event_scheduler 00:05:55.947 ************************************ 00:05:55.947 04:11:41 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:55.947 * Looking for test storage... 
00:05:55.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:55.947 04:11:41 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.947 04:11:41 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.947 04:11:41 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.947 04:11:41 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.947 04:11:41 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:55.947 04:11:41 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.947 04:11:41 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.947 --rc genhtml_branch_coverage=1 00:05:55.947 --rc genhtml_function_coverage=1 00:05:55.947 --rc genhtml_legend=1 00:05:55.947 --rc geninfo_all_blocks=1 00:05:55.948 --rc geninfo_unexecuted_blocks=1 00:05:55.948 00:05:55.948 ' 00:05:55.948 04:11:41 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.948 --rc genhtml_branch_coverage=1 00:05:55.948 --rc genhtml_function_coverage=1 00:05:55.948 --rc genhtml_legend=1 00:05:55.948 --rc geninfo_all_blocks=1 00:05:55.948 --rc geninfo_unexecuted_blocks=1 00:05:55.948 00:05:55.948 ' 00:05:55.948 04:11:41 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.948 --rc genhtml_branch_coverage=1 00:05:55.948 --rc genhtml_function_coverage=1 00:05:55.948 --rc genhtml_legend=1 00:05:55.948 --rc geninfo_all_blocks=1 00:05:55.948 --rc geninfo_unexecuted_blocks=1 00:05:55.948 00:05:55.948 ' 00:05:55.948 04:11:41 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.948 --rc genhtml_branch_coverage=1 00:05:55.948 --rc genhtml_function_coverage=1 00:05:55.948 --rc genhtml_legend=1 00:05:55.948 --rc geninfo_all_blocks=1 00:05:55.948 --rc geninfo_unexecuted_blocks=1 00:05:55.948 00:05:55.948 ' 00:05:55.948 04:11:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:55.948 04:11:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70470 00:05:55.948 04:11:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.948 04:11:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:55.948 04:11:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70470 00:05:55.948 04:11:41 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70470 ']' 00:05:55.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.948 04:11:41 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.948 04:11:41 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.948 04:11:41 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.948 04:11:41 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.948 04:11:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.948 [2024-11-17 04:11:41.649890] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:55.948 [2024-11-17 04:11:41.650141] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70470 ] 00:05:56.206 [2024-11-17 04:11:41.799696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.206 [2024-11-17 04:11:41.820586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.206 [2024-11-17 04:11:41.820806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.206 [2024-11-17 04:11:41.821033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.206 [2024-11-17 04:11:41.821110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.773 04:11:42 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.773 04:11:42 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:56.773 04:11:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:56.773 04:11:42 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.773 04:11:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:56.773 POWER: Cannot set governor of lcore 0 to userspace 00:05:56.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:56.773 POWER: Cannot set governor of lcore 0 to performance 00:05:56.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:56.773 POWER: Cannot set governor of lcore 0 to userspace 00:05:56.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:56.773 POWER: Cannot set governor of lcore 0 to userspace 00:05:56.773 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:56.773 POWER: Unable to set Power Management Environment for lcore 0 00:05:56.773 [2024-11-17 04:11:42.486815] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:56.773 [2024-11-17 04:11:42.486845] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:56.773 [2024-11-17 04:11:42.486853] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:56.773 [2024-11-17 04:11:42.486880] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:56.773 [2024-11-17 
04:11:42.486888] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:56.773 [2024-11-17 04:11:42.486896] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:56.773 04:11:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.773 04:11:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:56.773 04:11:42 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.773 04:11:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.032 [2024-11-17 04:11:42.541870] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:57.032 04:11:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.032 04:11:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.032 04:11:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.032 04:11:42 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.032 04:11:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.032 ************************************ 00:05:57.032 START TEST scheduler_create_thread 00:05:57.032 ************************************ 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.032 2 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.032 3 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.032 4 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.032 04:11:42 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.032 5 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:57.032 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.033 6 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.033 7 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.033 8 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.033 9 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.033 10 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.033 04:11:42 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.033 04:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.603 ************************************ 00:05:57.603 END TEST scheduler_create_thread 00:05:57.603 ************************************ 00:05:57.603 04:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.603 00:05:57.603 real 0m0.594s 00:05:57.603 user 0m0.014s 00:05:57.603 sys 0m0.004s 00:05:57.603 04:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.603 04:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.603 04:11:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:57.603 04:11:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70470 00:05:57.603 04:11:43 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70470 ']' 00:05:57.603 04:11:43 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70470 00:05:57.603 04:11:43 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:57.603 04:11:43 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.603 04:11:43 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70470 00:05:57.603 killing process with pid 70470 00:05:57.603 04:11:43 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:57.603 04:11:43 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:57.603 04:11:43 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70470' 00:05:57.603 04:11:43 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70470 00:05:57.603 04:11:43 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70470 00:05:58.171 [2024-11-17 04:11:43.623889] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:58.171 ************************************ 00:05:58.171 END TEST event_scheduler 00:05:58.171 ************************************ 00:05:58.171 00:05:58.171 real 0m2.302s 00:05:58.171 user 0m4.491s 00:05:58.171 sys 0m0.325s 00:05:58.171 04:11:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.171 04:11:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.171 04:11:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:58.171 04:11:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:58.171 04:11:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.171 04:11:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.171 04:11:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.171 ************************************ 00:05:58.171 START TEST app_repeat 00:05:58.171 ************************************ 00:05:58.171 04:11:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:58.171 Process app_repeat pid: 70543 00:05:58.171 spdk_app_start Round 0 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70543 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70543' 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70543 /var/tmp/spdk-nbd.sock 00:05:58.171 04:11:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70543 ']' 00:05:58.171 04:11:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.171 04:11:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.171 04:11:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:58.171 04:11:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.171 04:11:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.171 04:11:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.171 [2024-11-17 04:11:43.828643] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
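The event_scheduler run above drives its test binary entirely over the RPC socket: scheduler.sh switches the framework to the dynamic scheduler, finishes subsystem init, then uses the scheduler test plugin to create, re-weight and delete threads. A minimal hand-run sketch of that sequence (assuming the scheduler test app is already listening on /var/tmp/spdk.sock and that the scheduler_plugin module shipped with the test is importable by rpc.py; both are assumptions, only the commands themselves come from the trace above):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic     # scheduler.sh@39
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init                # scheduler.sh@40
    scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_delete 12

The thread IDs passed to set_active and delete are whatever the create calls returned; 11 and 12 match this particular run. The POWER/cpufreq errors are also expected here: the VM exposes no scaling_governor sysfs nodes and no virtio power-agent channel, so the dpdk governor fails to initialize, while the dynamic scheduler itself still starts and logs its options (load limit 20, core limit 80, core busy 95).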
00:05:58.171 [2024-11-17 04:11:43.828751] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70543 ] 00:05:58.429 [2024-11-17 04:11:43.985962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.429 [2024-11-17 04:11:44.005244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.429 [2024-11-17 04:11:44.005304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.995 04:11:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.995 04:11:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:58.995 04:11:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.254 Malloc0 00:05:59.254 04:11:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.513 Malloc1 00:05:59.513 04:11:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.513 04:11:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.770 /dev/nbd0 00:05:59.770 04:11:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.770 04:11:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:59.770 04:11:45 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.770 1+0 records in 00:05:59.770 1+0 records out 00:05:59.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000141322 s, 29.0 MB/s 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.770 04:11:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.771 04:11:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.771 04:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.771 04:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.771 04:11:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.029 /dev/nbd1 00:06:00.029 04:11:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.029 04:11:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.029 1+0 records in 00:06:00.029 1+0 records out 00:06:00.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001941 s, 21.1 MB/s 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.029 04:11:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.029 04:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.029 04:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.029 04:11:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.029 04:11:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.029 
04:11:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.317 { 00:06:00.317 "nbd_device": "/dev/nbd0", 00:06:00.317 "bdev_name": "Malloc0" 00:06:00.317 }, 00:06:00.317 { 00:06:00.317 "nbd_device": "/dev/nbd1", 00:06:00.317 "bdev_name": "Malloc1" 00:06:00.317 } 00:06:00.317 ]' 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.317 { 00:06:00.317 "nbd_device": "/dev/nbd0", 00:06:00.317 "bdev_name": "Malloc0" 00:06:00.317 }, 00:06:00.317 { 00:06:00.317 "nbd_device": "/dev/nbd1", 00:06:00.317 "bdev_name": "Malloc1" 00:06:00.317 } 00:06:00.317 ]' 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.317 /dev/nbd1' 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.317 /dev/nbd1' 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.317 256+0 records in 00:06:00.317 256+0 records out 00:06:00.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062715 s, 167 MB/s 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.317 04:11:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.318 256+0 records in 00:06:00.318 256+0 records out 00:06:00.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190839 s, 54.9 MB/s 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.318 256+0 records in 00:06:00.318 256+0 records out 00:06:00.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01503 s, 69.8 MB/s 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.318 04:11:45 event.app_repeat 
-- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.318 04:11:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.576 04:11:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.576 04:11:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.576 04:11:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.576 04:11:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.576 04:11:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.576 04:11:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.576 04:11:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.576 04:11:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.576 04:11:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.576 04:11:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.834 04:11:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.835 04:11:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.835 04:11:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.176 04:11:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.176 [2024-11-17 04:11:46.810116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.176 [2024-11-17 04:11:46.826816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.176 [2024-11-17 04:11:46.826821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.176 [2024-11-17 04:11:46.858558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.176 [2024-11-17 04:11:46.858604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.456 spdk_app_start Round 1 00:06:04.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.456 04:11:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.456 04:11:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:04.456 04:11:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70543 /var/tmp/spdk-nbd.sock 00:06:04.456 04:11:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70543 ']' 00:06:04.456 04:11:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.456 04:11:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.456 04:11:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
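Each app_repeat round repeats the same bdev-over-NBD setup that Round 0 traced above: two 64 MB malloc bdevs (4096-byte blocks) are created over the app's RPC socket and exported as /dev/nbd0 and /dev/nbd1, then torn down again once the data check is done. Reproduced by hand against a running app_repeat instance it is roughly the following (socket path and arguments taken from the trace; treat this as a sketch, not part of the log):

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # returns Malloc0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # returns Malloc1
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                     # two entries expected
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1

After the stop calls nbd_get_disks returns an empty array, which is what the count=0 comparison at bdev/nbd_common.sh@104-105 asserts before each round is allowed to finish.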
00:06:04.456 04:11:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.456 04:11:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.456 04:11:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.456 04:11:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.456 04:11:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.456 Malloc0 00:06:04.456 04:11:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.714 Malloc1 00:06:04.714 04:11:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.714 04:11:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.973 /dev/nbd0 00:06:04.973 04:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.973 04:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.973 1+0 records in 00:06:04.973 1+0 records out 
00:06:04.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233367 s, 17.6 MB/s 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.973 04:11:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.973 04:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.973 04:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.973 04:11:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.233 /dev/nbd1 00:06:05.233 04:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.233 04:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.233 1+0 records in 00:06:05.233 1+0 records out 00:06:05.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253479 s, 16.2 MB/s 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.233 04:11:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.233 04:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.233 04:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.233 04:11:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.233 04:11:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.233 04:11:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.491 { 00:06:05.491 "nbd_device": "/dev/nbd0", 00:06:05.491 "bdev_name": "Malloc0" 00:06:05.491 }, 00:06:05.491 { 00:06:05.491 "nbd_device": "/dev/nbd1", 00:06:05.491 "bdev_name": "Malloc1" 00:06:05.491 } 
00:06:05.491 ]' 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.491 { 00:06:05.491 "nbd_device": "/dev/nbd0", 00:06:05.491 "bdev_name": "Malloc0" 00:06:05.491 }, 00:06:05.491 { 00:06:05.491 "nbd_device": "/dev/nbd1", 00:06:05.491 "bdev_name": "Malloc1" 00:06:05.491 } 00:06:05.491 ]' 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.491 /dev/nbd1' 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.491 /dev/nbd1' 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.491 256+0 records in 00:06:05.491 256+0 records out 00:06:05.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00963035 s, 109 MB/s 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.491 04:11:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.491 256+0 records in 00:06:05.491 256+0 records out 00:06:05.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164241 s, 63.8 MB/s 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.492 256+0 records in 00:06:05.492 256+0 records out 00:06:05.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157491 s, 66.6 MB/s 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.492 04:11:51 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.492 04:11:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.749 04:11:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.750 04:11:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.750 04:11:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.750 04:11:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.750 04:11:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.750 04:11:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.750 04:11:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.750 04:11:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.750 04:11:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.750 04:11:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.007 04:11:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.265 04:11:51 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.265 04:11:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.265 04:11:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.523 04:11:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.523 [2024-11-17 04:11:52.084037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.523 [2024-11-17 04:11:52.099777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.523 [2024-11-17 04:11:52.099780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.523 [2024-11-17 04:11:52.129166] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.523 [2024-11-17 04:11:52.129208] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.804 04:11:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.804 spdk_app_start Round 2 00:06:09.804 04:11:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:09.804 04:11:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70543 /var/tmp/spdk-nbd.sock 00:06:09.804 04:11:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70543 ']' 00:06:09.804 04:11:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.804 04:11:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.804 04:11:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
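The nbd_dd_data_verify helper whose trace repeats in every round is a plain dd/cmp round trip: 1 MiB of random data (256 blocks of 4096 bytes) is written through each NBD device with O_DIRECT and then compared back against the source file. A stand-alone version of the same check, using a local scratch file instead of the repo path shown in the log, would look like:

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256            # generate 1 MiB of test data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the NBD export
    dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                             # silent, exit 0 on a byte-for-byte match
    cmp -b -n 1M nbdrandtest /dev/nbd1
    rm nbdrandtest

cmp -n 1M limits the comparison to the first mebibyte, i.e. exactly the region the writes covered, so leftover data further into the malloc bdev cannot cause a false mismatch.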
00:06:09.804 04:11:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.804 04:11:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.804 04:11:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.804 04:11:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:09.804 04:11:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.804 Malloc0 00:06:09.804 04:11:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.063 Malloc1 00:06:10.063 04:11:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.063 /dev/nbd0 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.063 04:11:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.063 04:11:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:10.063 04:11:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.063 04:11:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.063 04:11:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.063 04:11:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:10.063 04:11:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.063 04:11:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.063 04:11:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.063 04:11:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.063 1+0 records in 00:06:10.063 1+0 records out 
00:06:10.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232523 s, 17.6 MB/s 00:06:10.063 04:11:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.321 04:11:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.321 04:11:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.321 04:11:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.321 04:11:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.321 04:11:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.321 04:11:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.321 04:11:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.321 /dev/nbd1 00:06:10.321 04:11:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.321 04:11:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.321 04:11:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:10.321 04:11:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.321 04:11:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.321 04:11:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.321 04:11:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:10.321 04:11:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.321 04:11:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.321 04:11:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.321 04:11:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.321 1+0 records in 00:06:10.322 1+0 records out 00:06:10.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040381 s, 10.1 MB/s 00:06:10.322 04:11:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.322 04:11:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.322 04:11:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.322 04:11:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.322 04:11:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.322 04:11:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.322 04:11:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.322 04:11:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.322 04:11:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.322 04:11:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.580 { 00:06:10.580 "nbd_device": "/dev/nbd0", 00:06:10.580 "bdev_name": "Malloc0" 00:06:10.580 }, 00:06:10.580 { 00:06:10.580 "nbd_device": "/dev/nbd1", 00:06:10.580 "bdev_name": "Malloc1" 00:06:10.580 } 
00:06:10.580 ]' 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.580 { 00:06:10.580 "nbd_device": "/dev/nbd0", 00:06:10.580 "bdev_name": "Malloc0" 00:06:10.580 }, 00:06:10.580 { 00:06:10.580 "nbd_device": "/dev/nbd1", 00:06:10.580 "bdev_name": "Malloc1" 00:06:10.580 } 00:06:10.580 ]' 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.580 /dev/nbd1' 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.580 /dev/nbd1' 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.580 256+0 records in 00:06:10.580 256+0 records out 00:06:10.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00894053 s, 117 MB/s 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.580 256+0 records in 00:06:10.580 256+0 records out 00:06:10.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153432 s, 68.3 MB/s 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.580 256+0 records in 00:06:10.580 256+0 records out 00:06:10.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152252 s, 68.9 MB/s 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.580 04:11:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.580 04:11:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.838 04:11:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.839 04:11:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.097 04:11:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.357 04:11:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.357 04:11:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.615 04:11:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.615 [2024-11-17 04:11:57.197371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.615 [2024-11-17 04:11:57.213721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.615 [2024-11-17 04:11:57.213723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.615 [2024-11-17 04:11:57.243240] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.615 [2024-11-17 04:11:57.243287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.900 04:12:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70543 /var/tmp/spdk-nbd.sock 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70543 ']' 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
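The nbd_dd_data_verify calls above first write 1 MiB of random data (256 blocks of 4096 bytes) through each exported NBD device and then compare the device contents back against the source file before deleting it. A minimal stand-alone sketch of that write/verify pattern, using the same dd/cmp invocations as the trace; the temp-file path and device list are placeholders rather than the exact nbd_common.sh helper:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest

    # write: 1 MiB of random data, pushed to every exported NBD device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: the first 1 MiB of each device must match the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"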
00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:14.900 04:12:00 event.app_repeat -- event/event.sh@39 -- # killprocess 70543 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70543 ']' 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70543 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70543 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.900 killing process with pid 70543 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70543' 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70543 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70543 00:06:14.900 spdk_app_start is called in Round 0. 00:06:14.900 Shutdown signal received, stop current app iteration 00:06:14.900 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:14.900 spdk_app_start is called in Round 1. 00:06:14.900 Shutdown signal received, stop current app iteration 00:06:14.900 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:14.900 spdk_app_start is called in Round 2. 00:06:14.900 Shutdown signal received, stop current app iteration 00:06:14.900 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:14.900 spdk_app_start is called in Round 3. 00:06:14.900 Shutdown signal received, stop current app iteration 00:06:14.900 04:12:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:14.900 04:12:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:14.900 00:06:14.900 real 0m16.675s 00:06:14.900 user 0m37.280s 00:06:14.900 sys 0m1.977s 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.900 04:12:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 ************************************ 00:06:14.900 END TEST app_repeat 00:06:14.900 ************************************ 00:06:14.900 04:12:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:14.900 04:12:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:14.900 04:12:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.900 04:12:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.900 04:12:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 ************************************ 00:06:14.900 START TEST cpu_locks 00:06:14.900 ************************************ 00:06:14.900 04:12:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:14.900 * Looking for test storage... 
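The Round 0 through Round 3 messages above summarize what app_repeat does with those devices: in each round the event app is exercised, told to shut down with spdk_kill_instance SIGTERM, and then waited for again once the test driver restarts it. A rough sketch of that repeat loop, assuming the same socket as above; the structure is inferred from the event.sh trace and is illustrative, not the exact script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    for round in 0 1 2 3; do
        # wait until the (re)started app answers RPCs on its socket
        until "$rpc" -s "$sock" rpc_get_methods > /dev/null 2>&1; do sleep 0.1; done
        # ... per-round bdev/NBD checks go here ...
        "$rpc" -s "$sock" spdk_kill_instance SIGTERM   # end this iteration
        sleep 3
    done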
00:06:14.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:14.900 04:12:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.900 04:12:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.900 04:12:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.159 04:12:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.159 04:12:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:15.159 04:12:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.159 04:12:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.159 --rc genhtml_branch_coverage=1 00:06:15.159 --rc genhtml_function_coverage=1 00:06:15.159 --rc genhtml_legend=1 00:06:15.159 --rc geninfo_all_blocks=1 00:06:15.159 --rc geninfo_unexecuted_blocks=1 00:06:15.159 00:06:15.159 ' 00:06:15.159 04:12:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.159 --rc genhtml_branch_coverage=1 00:06:15.159 --rc genhtml_function_coverage=1 
00:06:15.159 --rc genhtml_legend=1 00:06:15.159 --rc geninfo_all_blocks=1 00:06:15.159 --rc geninfo_unexecuted_blocks=1 00:06:15.159 00:06:15.159 ' 00:06:15.159 04:12:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.159 --rc genhtml_branch_coverage=1 00:06:15.159 --rc genhtml_function_coverage=1 00:06:15.159 --rc genhtml_legend=1 00:06:15.159 --rc geninfo_all_blocks=1 00:06:15.159 --rc geninfo_unexecuted_blocks=1 00:06:15.159 00:06:15.159 ' 00:06:15.159 04:12:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.159 --rc genhtml_branch_coverage=1 00:06:15.159 --rc genhtml_function_coverage=1 00:06:15.159 --rc genhtml_legend=1 00:06:15.159 --rc geninfo_all_blocks=1 00:06:15.159 --rc geninfo_unexecuted_blocks=1 00:06:15.159 00:06:15.159 ' 00:06:15.159 04:12:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:15.159 04:12:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:15.159 04:12:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:15.159 04:12:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:15.159 04:12:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.159 04:12:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.159 04:12:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.159 ************************************ 00:06:15.159 START TEST default_locks 00:06:15.159 ************************************ 00:06:15.159 04:12:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:15.159 04:12:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70962 00:06:15.159 04:12:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70962 00:06:15.159 04:12:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70962 ']' 00:06:15.159 04:12:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.159 04:12:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.159 04:12:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.159 04:12:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.159 04:12:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.159 04:12:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.159 [2024-11-17 04:12:00.742917] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
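The lcov guard above calls scripts/common.sh's cmp_versions (via lt 1.15 2) to decide whether the installed lcov is older than 2.x before choosing coverage flags. A simplified stand-alone sketch of that dot-separated numeric comparison; the function name and return convention here are illustrative, not the library routine itself:

    version_lt() {            # succeeds (returns 0) when version $1 is older than $2
        local IFS='.-'
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1              # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "old lcov detected, using reduced LCOV_OPTS"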
00:06:15.159 [2024-11-17 04:12:00.743034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70962 ] 00:06:15.457 [2024-11-17 04:12:00.896997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.457 [2024-11-17 04:12:00.913969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70962 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70962 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70962 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70962 ']' 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70962 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70962 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.046 killing process with pid 70962 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70962' 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70962 00:06:16.046 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70962 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70962 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70962 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70962 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70962 ']' 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
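default_locks above verifies that a freshly started spdk_tgt holds a per-core file lock, visible to lslocks under the name spdk_cpu_lock, and that the lock goes away once the target is killed. A condensed sketch of the two helpers that trace shows, simplified from autotest_common.sh / cpu_locks.sh rather than copied verbatim:

    locks_exist() {                 # does pid $1 hold an spdk_cpu_lock file lock?
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    killprocess() {
        local pid=$1
        # only signal processes that still look like one of our reactors
        [[ $(ps --no-headers -o comm= "$pid") == reactor_* ]] || return 0
        kill "$pid"
        wait "$pid" 2> /dev/null || true   # wait works because spdk_tgt is our child
    }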
00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.307 ERROR: process (pid: 70962) is no longer running 00:06:16.307 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70962) - No such process 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.307 00:06:16.307 real 0m1.268s 00:06:16.307 user 0m1.307s 00:06:16.307 sys 0m0.358s 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.307 04:12:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.307 ************************************ 00:06:16.307 END TEST default_locks 00:06:16.307 ************************************ 00:06:16.307 04:12:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:16.307 04:12:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.307 04:12:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.307 04:12:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.307 ************************************ 00:06:16.307 START TEST default_locks_via_rpc 00:06:16.307 ************************************ 00:06:16.307 04:12:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:16.307 04:12:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71010 00:06:16.307 04:12:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71010 00:06:16.307 04:12:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71010 ']' 00:06:16.307 04:12:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.307 04:12:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.307 04:12:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.307 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:06:16.307 04:12:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.307 04:12:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.307 04:12:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.567 [2024-11-17 04:12:02.063870] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:16.567 [2024-11-17 04:12:02.063989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71010 ] 00:06:16.567 [2024-11-17 04:12:02.215339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.567 [2024-11-17 04:12:02.234355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.502 04:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.502 04:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.502 04:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:17.502 04:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.502 04:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.502 04:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.502 04:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:17.502 04:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.502 04:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.502 04:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.503 04:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.503 04:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.503 04:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.503 04:12:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.503 04:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71010 00:06:17.503 04:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.503 04:12:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71010 00:06:17.503 04:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71010 00:06:17.503 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71010 ']' 00:06:17.503 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71010 00:06:17.503 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:17.503 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.503 04:12:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71010 00:06:17.503 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.503 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.503 killing process with pid 71010 00:06:17.503 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71010' 00:06:17.503 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71010 00:06:17.503 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71010 00:06:17.760 00:06:17.760 real 0m1.329s 00:06:17.760 user 0m1.351s 00:06:17.760 sys 0m0.385s 00:06:17.760 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.760 ************************************ 00:06:17.760 END TEST default_locks_via_rpc 00:06:17.760 ************************************ 00:06:17.760 04:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.760 04:12:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.760 04:12:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.760 04:12:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.760 04:12:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.760 ************************************ 00:06:17.760 START TEST non_locking_app_on_locked_coremask 00:06:17.760 ************************************ 00:06:17.760 04:12:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:17.760 04:12:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71051 00:06:17.760 04:12:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71051 /var/tmp/spdk.sock 00:06:17.760 04:12:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71051 ']' 00:06:17.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.760 04:12:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.760 04:12:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.760 04:12:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.760 04:12:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.760 04:12:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.760 04:12:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.760 [2024-11-17 04:12:03.438136] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
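default_locks_via_rpc above exercises the same lock from the RPC side: framework_disable_cpumask_locks releases the per-core lock file(s) on a running target and framework_enable_cpumask_locks re-acquires them, after which lslocks sees spdk_cpu_lock again. A minimal sketch of that toggle, assuming tgt_pid holds the running spdk_tgt pid and the default /var/tmp/spdk.sock socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" framework_disable_cpumask_locks        # drop the core lock file(s) at runtime
    ls /var/tmp/spdk_cpu_lock_* 2> /dev/null      # expect no output while disabled
    "$rpc" framework_enable_cpumask_locks         # take the core lock file(s) again
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"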
00:06:17.760 [2024-11-17 04:12:03.438260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71051 ] 00:06:18.018 [2024-11-17 04:12:03.593288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.018 [2024-11-17 04:12:03.612195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71067 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71067 /var/tmp/spdk2.sock 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71067 ']' 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.587 04:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.587 [2024-11-17 04:12:04.310593] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:18.587 [2024-11-17 04:12:04.310705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:06:18.847 [2024-11-17 04:12:04.484253] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
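non_locking_app_on_locked_coremask starts a second spdk_tgt on the same core as the first; it only comes up because it is given --disable-cpumask-locks and its own RPC socket, so it never competes for the core-0 lock the first target holds. A reduced sketch of that launch sequence using the binary path, mask and sockets from the trace; the readiness waits and pid bookkeeping of the real test are omitted:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                 # first instance, takes the core-0 lock
    pid1=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                              # second instance, same core, no locking

    # only the first instance should show up as holding spdk_cpu_lock
    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "pid1 holds the core lock"
    lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "pid2 holds no core lock"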
00:06:18.847 [2024-11-17 04:12:04.484301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.847 [2024-11-17 04:12:04.522688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.418 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.418 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.418 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71051 00:06:19.418 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71051 00:06:19.418 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71051 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71051 ']' 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71051 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71051 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.985 killing process with pid 71051 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71051' 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71051 00:06:19.985 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71051 00:06:20.247 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71067 00:06:20.247 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71067 ']' 00:06:20.247 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71067 00:06:20.247 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.247 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.247 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71067 00:06:20.247 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.247 killing process with pid 71067 00:06:20.247 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.247 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71067' 00:06:20.247 04:12:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71067 00:06:20.247 04:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71067 00:06:20.506 00:06:20.506 real 0m2.849s 00:06:20.506 user 0m3.116s 00:06:20.506 sys 0m0.766s 00:06:20.506 ************************************ 00:06:20.506 END TEST non_locking_app_on_locked_coremask 00:06:20.506 ************************************ 00:06:20.506 04:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.506 04:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.766 04:12:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:20.766 04:12:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.766 04:12:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.766 04:12:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.766 ************************************ 00:06:20.766 START TEST locking_app_on_unlocked_coremask 00:06:20.766 ************************************ 00:06:20.766 04:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:20.766 04:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71125 00:06:20.766 04:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71125 /var/tmp/spdk.sock 00:06:20.766 04:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:20.766 04:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71125 ']' 00:06:20.766 04:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.766 04:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.766 04:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.766 04:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.766 04:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.766 [2024-11-17 04:12:06.340411] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:20.766 [2024-11-17 04:12:06.340529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71125 ] 00:06:21.026 [2024-11-17 04:12:06.495550] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.026 [2024-11-17 04:12:06.495599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.026 [2024-11-17 04:12:06.514233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.594 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.594 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.594 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71141 00:06:21.594 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71141 /var/tmp/spdk2.sock 00:06:21.594 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71141 ']' 00:06:21.595 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.595 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.595 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.595 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.595 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.595 04:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.595 [2024-11-17 04:12:07.253156] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
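Each "Waiting for process to start up and listen on UNIX domain socket ..." line above comes from a waitforlisten-style helper that polls the new target's RPC socket until it answers, giving up if the process dies first. A hedged sketch of such a polling loop; the retry count and the use of rpc_get_methods as the probe are assumptions, the real helper lives in autotest_common.sh:

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
                > /dev/null 2>&1 && return 0          # RPC server is up
            sleep 0.1
        done
        return 1
    }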
00:06:21.595 [2024-11-17 04:12:07.253297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71141 ] 00:06:21.854 [2024-11-17 04:12:07.436146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.854 [2024-11-17 04:12:07.493942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.422 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.422 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.422 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71141 00:06:22.422 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71141 00:06:22.422 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.680 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71125 00:06:22.680 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71125 ']' 00:06:22.680 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71125 00:06:22.680 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.680 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.680 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71125 00:06:22.941 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.941 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.941 killing process with pid 71125 00:06:22.941 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71125' 00:06:22.941 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71125 00:06:22.941 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71125 00:06:23.541 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71141 00:06:23.541 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71141 ']' 00:06:23.541 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71141 00:06:23.541 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.541 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.541 04:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71141 00:06:23.541 04:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.541 killing process with pid 71141 00:06:23.541 04:12:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.541 04:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71141' 00:06:23.541 04:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71141 00:06:23.541 04:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71141 00:06:23.801 00:06:23.801 real 0m3.111s 00:06:23.801 user 0m3.346s 00:06:23.801 sys 0m0.874s 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.801 ************************************ 00:06:23.801 END TEST locking_app_on_unlocked_coremask 00:06:23.801 ************************************ 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.801 04:12:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.801 04:12:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.801 04:12:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.801 04:12:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.801 ************************************ 00:06:23.801 START TEST locking_app_on_locked_coremask 00:06:23.801 ************************************ 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71199 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71199 /var/tmp/spdk.sock 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71199 ']' 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.801 04:12:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.062 [2024-11-17 04:12:09.536468] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:24.062 [2024-11-17 04:12:09.536614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71199 ] 00:06:24.062 [2024-11-17 04:12:09.696808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.062 [2024-11-17 04:12:09.728829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71215 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71215 /var/tmp/spdk2.sock 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71215 /var/tmp/spdk2.sock 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71215 /var/tmp/spdk2.sock 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71215 ']' 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.005 04:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.005 [2024-11-17 04:12:10.473490] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
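locking_app_on_locked_coremask expects the second instance to fail, so its waitforlisten is wrapped in the NOT helper (the "NOT waitforlisten 71215" above): the wrapper inverts the exit status, so the startup failure reported just below counts as the test passing. A simplified sketch of that inversion; the real helper also validates its argument and special-cases exit codes above 128:

    not_sketch() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # succeed only if the wrapped command failed
    }

    not_sketch waitforlisten "$pid2" /var/tmp/spdk2.sock \
        && echo "second instance failed to start, as expected"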
00:06:25.005 [2024-11-17 04:12:10.473643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71215 ] 00:06:25.005 [2024-11-17 04:12:10.651122] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71199 has claimed it. 00:06:25.005 [2024-11-17 04:12:10.651234] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.578 ERROR: process (pid: 71215) is no longer running 00:06:25.578 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71215) - No such process 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71199 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71199 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71199 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71199 ']' 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71199 00:06:25.578 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.838 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.838 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71199 00:06:25.838 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.838 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.838 killing process with pid 71199 00:06:25.838 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71199' 00:06:25.838 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71199 00:06:25.838 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71199 00:06:26.100 00:06:26.100 real 0m2.192s 00:06:26.100 user 0m2.415s 00:06:26.100 sys 0m0.603s 00:06:26.100 04:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.100 04:12:11 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:26.100 ************************************ 00:06:26.100 END TEST locking_app_on_locked_coremask 00:06:26.100 ************************************ 00:06:26.100 04:12:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:26.100 04:12:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.100 04:12:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.100 04:12:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.100 ************************************ 00:06:26.100 START TEST locking_overlapped_coremask 00:06:26.100 ************************************ 00:06:26.100 04:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:26.100 04:12:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71257 00:06:26.100 04:12:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71257 /var/tmp/spdk.sock 00:06:26.100 04:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71257 ']' 00:06:26.100 04:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.100 04:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.100 04:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.100 04:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.100 04:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.100 04:12:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:26.100 [2024-11-17 04:12:11.790066] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:26.100 [2024-11-17 04:12:11.790231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71257 ] 00:06:26.361 [2024-11-17 04:12:11.946537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.361 [2024-11-17 04:12:11.979478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.361 [2024-11-17 04:12:11.979647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.361 [2024-11-17 04:12:11.979735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71275 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71275 /var/tmp/spdk2.sock 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71275 /var/tmp/spdk2.sock 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71275 /var/tmp/spdk2.sock 00:06:26.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71275 ']' 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.933 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.934 04:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.195 [2024-11-17 04:12:12.704870] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
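locking_overlapped_coremask runs the first target with -m 0x7 (cores 0-2) and attempts a second with -m 0x1c (cores 2-4); the two masks share core 2, so the second instance cannot take all of its core locks. The contested core is just the bitwise AND of the two masks:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 is contested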
00:06:27.195 [2024-11-17 04:12:12.704991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71275 ] 00:06:27.195 [2024-11-17 04:12:12.876495] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71257 has claimed it. 00:06:27.195 [2024-11-17 04:12:12.876555] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.767 ERROR: process (pid: 71275) is no longer running 00:06:27.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71275) - No such process 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71257 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71257 ']' 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71257 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71257 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.767 killing process with pid 71257 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71257' 00:06:27.767 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71257 00:06:27.767 04:12:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71257 00:06:28.029 00:06:28.029 real 0m1.911s 00:06:28.029 user 0m5.124s 00:06:28.029 sys 0m0.515s 00:06:28.029 ************************************ 00:06:28.029 END TEST locking_overlapped_coremask 00:06:28.029 ************************************ 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.029 04:12:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:28.029 04:12:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.029 04:12:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.029 04:12:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.029 ************************************ 00:06:28.029 START TEST locking_overlapped_coremask_via_rpc 00:06:28.029 ************************************ 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71317 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71317 /var/tmp/spdk.sock 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71317 ']' 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.029 04:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:28.029 [2024-11-17 04:12:13.749929] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:28.029 [2024-11-17 04:12:13.750053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71317 ] 00:06:28.290 [2024-11-17 04:12:13.907335] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.290 [2024-11-17 04:12:13.907387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.290 [2024-11-17 04:12:13.928249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.290 [2024-11-17 04:12:13.928559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.290 [2024-11-17 04:12:13.928619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71335 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71335 /var/tmp/spdk2.sock 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71335 ']' 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:29.228 04:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.228 [2024-11-17 04:12:14.677086] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:29.228 [2024-11-17 04:12:14.677204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71335 ] 00:06:29.228 [2024-11-17 04:12:14.856543] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.228 [2024-11-17 04:12:14.856608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.228 [2024-11-17 04:12:14.898595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.228 [2024-11-17 04:12:14.901453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.228 [2024-11-17 04:12:14.901495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.170 [2024-11-17 04:12:15.542537] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71317 has claimed it. 
00:06:30.170 request: 00:06:30.170 { 00:06:30.170 "method": "framework_enable_cpumask_locks", 00:06:30.170 "req_id": 1 00:06:30.170 } 00:06:30.170 Got JSON-RPC error response 00:06:30.170 response: 00:06:30.170 { 00:06:30.170 "code": -32603, 00:06:30.170 "message": "Failed to claim CPU core: 2" 00:06:30.170 } 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71317 /var/tmp/spdk.sock 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71317 ']' 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71335 /var/tmp/spdk2.sock 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71335 ']' 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.170 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.430 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.430 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.430 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.430 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.430 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.430 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.430 00:06:30.430 real 0m2.319s 00:06:30.430 user 0m1.132s 00:06:30.430 sys 0m0.111s 00:06:30.430 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.430 04:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.430 ************************************ 00:06:30.430 END TEST locking_overlapped_coremask_via_rpc 00:06:30.430 ************************************ 00:06:30.430 04:12:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.430 04:12:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71317 ]] 00:06:30.430 04:12:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71317 00:06:30.430 04:12:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71317 ']' 00:06:30.430 04:12:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71317 00:06:30.430 04:12:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.430 04:12:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.430 04:12:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71317 00:06:30.430 04:12:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.430 04:12:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.430 killing process with pid 71317 00:06:30.430 04:12:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71317' 00:06:30.430 04:12:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71317 00:06:30.430 04:12:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71317 00:06:30.691 04:12:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71335 ]] 00:06:30.691 04:12:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71335 00:06:30.691 04:12:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71335 ']' 00:06:30.691 04:12:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71335 00:06:30.691 04:12:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.691 04:12:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.691 
04:12:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71335 00:06:30.691 04:12:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:30.691 04:12:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:30.691 killing process with pid 71335 00:06:30.691 04:12:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71335' 00:06:30.691 04:12:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71335 00:06:30.691 04:12:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71335 00:06:30.952 04:12:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.952 04:12:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:30.952 04:12:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71317 ]] 00:06:30.952 04:12:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71317 00:06:30.952 04:12:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71317 ']' 00:06:30.952 04:12:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71317 00:06:30.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71317) - No such process 00:06:30.952 Process with pid 71317 is not found 00:06:30.952 04:12:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71317 is not found' 00:06:30.952 04:12:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71335 ]] 00:06:30.952 04:12:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71335 00:06:30.952 04:12:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71335 ']' 00:06:30.952 04:12:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71335 00:06:30.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71335) - No such process 00:06:30.952 Process with pid 71335 is not found 00:06:30.952 04:12:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71335 is not found' 00:06:30.952 04:12:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.952 00:06:30.952 real 0m16.090s 00:06:30.952 user 0m28.352s 00:06:30.952 sys 0m4.359s 00:06:30.952 04:12:16 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.952 04:12:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.952 ************************************ 00:06:30.952 END TEST cpu_locks 00:06:30.952 ************************************ 00:06:30.952 00:06:30.952 real 0m39.224s 00:06:30.952 user 1m16.518s 00:06:30.952 sys 0m7.054s 00:06:30.952 04:12:16 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.952 ************************************ 00:06:30.952 END TEST event 00:06:30.952 ************************************ 00:06:30.952 04:12:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.214 04:12:16 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.214 04:12:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.214 04:12:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.214 04:12:16 -- common/autotest_common.sh@10 -- # set +x 00:06:31.214 ************************************ 00:06:31.214 START TEST thread 00:06:31.214 ************************************ 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.214 * Looking for test storage... 
00:06:31.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.214 04:12:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.214 04:12:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.214 04:12:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.214 04:12:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.214 04:12:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.214 04:12:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.214 04:12:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.214 04:12:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.214 04:12:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.214 04:12:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.214 04:12:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.214 04:12:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:31.214 04:12:16 thread -- scripts/common.sh@345 -- # : 1 00:06:31.214 04:12:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.214 04:12:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.214 04:12:16 thread -- scripts/common.sh@365 -- # decimal 1 00:06:31.214 04:12:16 thread -- scripts/common.sh@353 -- # local d=1 00:06:31.214 04:12:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.214 04:12:16 thread -- scripts/common.sh@355 -- # echo 1 00:06:31.214 04:12:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.214 04:12:16 thread -- scripts/common.sh@366 -- # decimal 2 00:06:31.214 04:12:16 thread -- scripts/common.sh@353 -- # local d=2 00:06:31.214 04:12:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.214 04:12:16 thread -- scripts/common.sh@355 -- # echo 2 00:06:31.214 04:12:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.214 04:12:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.214 04:12:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.214 04:12:16 thread -- scripts/common.sh@368 -- # return 0 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.214 --rc genhtml_branch_coverage=1 00:06:31.214 --rc genhtml_function_coverage=1 00:06:31.214 --rc genhtml_legend=1 00:06:31.214 --rc geninfo_all_blocks=1 00:06:31.214 --rc geninfo_unexecuted_blocks=1 00:06:31.214 00:06:31.214 ' 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.214 --rc genhtml_branch_coverage=1 00:06:31.214 --rc genhtml_function_coverage=1 00:06:31.214 --rc genhtml_legend=1 00:06:31.214 --rc geninfo_all_blocks=1 00:06:31.214 --rc geninfo_unexecuted_blocks=1 00:06:31.214 00:06:31.214 ' 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:31.214 --rc genhtml_branch_coverage=1 00:06:31.214 --rc genhtml_function_coverage=1 00:06:31.214 --rc genhtml_legend=1 00:06:31.214 --rc geninfo_all_blocks=1 00:06:31.214 --rc geninfo_unexecuted_blocks=1 00:06:31.214 00:06:31.214 ' 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.214 --rc genhtml_branch_coverage=1 00:06:31.214 --rc genhtml_function_coverage=1 00:06:31.214 --rc genhtml_legend=1 00:06:31.214 --rc geninfo_all_blocks=1 00:06:31.214 --rc geninfo_unexecuted_blocks=1 00:06:31.214 00:06:31.214 ' 00:06:31.214 04:12:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.214 04:12:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.214 ************************************ 00:06:31.214 START TEST thread_poller_perf 00:06:31.214 ************************************ 00:06:31.214 04:12:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.214 [2024-11-17 04:12:16.893711] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:31.214 [2024-11-17 04:12:16.893825] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71462 ] 00:06:31.476 [2024-11-17 04:12:17.050607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.476 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:31.476 [2024-11-17 04:12:17.069147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.419 [2024-11-17T04:12:18.146Z] ====================================== 00:06:32.419 [2024-11-17T04:12:18.146Z] busy:2611938082 (cyc) 00:06:32.419 [2024-11-17T04:12:18.146Z] total_run_count: 306000 00:06:32.419 [2024-11-17T04:12:18.146Z] tsc_hz: 2600000000 (cyc) 00:06:32.419 [2024-11-17T04:12:18.146Z] ====================================== 00:06:32.419 [2024-11-17T04:12:18.146Z] poller_cost: 8535 (cyc), 3282 (nsec) 00:06:32.419 00:06:32.419 real 0m1.273s 00:06:32.419 user 0m1.110s 00:06:32.419 sys 0m0.056s 00:06:32.419 ************************************ 00:06:32.419 END TEST thread_poller_perf 00:06:32.419 ************************************ 00:06:32.419 04:12:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.419 04:12:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.680 04:12:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.680 04:12:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:32.680 04:12:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.680 04:12:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.680 ************************************ 00:06:32.681 START TEST thread_poller_perf 00:06:32.681 ************************************ 00:06:32.681 04:12:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.681 [2024-11-17 04:12:18.225631] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:32.681 [2024-11-17 04:12:18.226179] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71504 ] 00:06:32.681 [2024-11-17 04:12:18.386537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.942 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:32.942 [2024-11-17 04:12:18.415665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.946 [2024-11-17T04:12:19.673Z] ====================================== 00:06:33.946 [2024-11-17T04:12:19.673Z] busy:2603418166 (cyc) 00:06:33.946 [2024-11-17T04:12:19.673Z] total_run_count: 3959000 00:06:33.946 [2024-11-17T04:12:19.673Z] tsc_hz: 2600000000 (cyc) 00:06:33.946 [2024-11-17T04:12:19.673Z] ====================================== 00:06:33.946 [2024-11-17T04:12:19.673Z] poller_cost: 657 (cyc), 252 (nsec) 00:06:33.946 00:06:33.946 real 0m1.272s 00:06:33.946 user 0m1.084s 00:06:33.946 sys 0m0.079s 00:06:33.946 ************************************ 00:06:33.946 END TEST thread_poller_perf 00:06:33.946 ************************************ 00:06:33.946 04:12:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.946 04:12:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.946 04:12:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:33.946 ************************************ 00:06:33.946 END TEST thread 00:06:33.946 ************************************ 00:06:33.946 00:06:33.946 real 0m2.799s 00:06:33.946 user 0m2.306s 00:06:33.946 sys 0m0.244s 00:06:33.946 04:12:19 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.946 04:12:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.946 04:12:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:33.946 04:12:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:33.946 04:12:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.946 04:12:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.946 04:12:19 -- common/autotest_common.sh@10 -- # set +x 00:06:33.946 ************************************ 00:06:33.946 START TEST app_cmdline 00:06:33.946 ************************************ 00:06:33.947 04:12:19 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:33.947 * Looking for test storage... 
00:06:33.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:33.947 04:12:19 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.947 04:12:19 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.947 04:12:19 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.208 04:12:19 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.208 04:12:19 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:34.208 04:12:19 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.208 04:12:19 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.208 --rc genhtml_branch_coverage=1 00:06:34.208 --rc genhtml_function_coverage=1 00:06:34.208 --rc genhtml_legend=1 00:06:34.208 --rc geninfo_all_blocks=1 00:06:34.208 --rc geninfo_unexecuted_blocks=1 00:06:34.208 00:06:34.208 ' 00:06:34.208 04:12:19 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.208 --rc genhtml_branch_coverage=1 00:06:34.208 --rc genhtml_function_coverage=1 00:06:34.208 --rc genhtml_legend=1 00:06:34.208 --rc geninfo_all_blocks=1 00:06:34.208 --rc geninfo_unexecuted_blocks=1 00:06:34.208 
00:06:34.208 ' 00:06:34.208 04:12:19 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.208 --rc genhtml_branch_coverage=1 00:06:34.208 --rc genhtml_function_coverage=1 00:06:34.208 --rc genhtml_legend=1 00:06:34.208 --rc geninfo_all_blocks=1 00:06:34.208 --rc geninfo_unexecuted_blocks=1 00:06:34.208 00:06:34.208 ' 00:06:34.208 04:12:19 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.208 --rc genhtml_branch_coverage=1 00:06:34.208 --rc genhtml_function_coverage=1 00:06:34.208 --rc genhtml_legend=1 00:06:34.208 --rc geninfo_all_blocks=1 00:06:34.208 --rc geninfo_unexecuted_blocks=1 00:06:34.208 00:06:34.208 ' 00:06:34.208 04:12:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:34.208 04:12:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71582 00:06:34.209 04:12:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71582 00:06:34.209 04:12:19 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71582 ']' 00:06:34.209 04:12:19 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.209 04:12:19 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:34.209 04:12:19 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.209 04:12:19 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.209 04:12:19 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.209 04:12:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.209 [2024-11-17 04:12:19.796845] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:34.209 [2024-11-17 04:12:19.796968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71582 ] 00:06:34.470 [2024-11-17 04:12:19.946987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.470 [2024-11-17 04:12:19.966290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.044 04:12:20 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.044 04:12:20 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:35.044 04:12:20 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:35.305 { 00:06:35.305 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:35.305 "fields": { 00:06:35.305 "major": 25, 00:06:35.305 "minor": 1, 00:06:35.305 "patch": 0, 00:06:35.305 "suffix": "-pre", 00:06:35.305 "commit": "83e8405e4" 00:06:35.305 } 00:06:35.305 } 00:06:35.305 04:12:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:35.305 04:12:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:35.305 04:12:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:35.305 04:12:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:35.305 04:12:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:35.305 04:12:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:35.305 04:12:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.305 04:12:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:35.305 04:12:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:35.305 04:12:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:35.305 04:12:20 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.567 request: 00:06:35.567 { 00:06:35.567 "method": "env_dpdk_get_mem_stats", 00:06:35.567 "req_id": 1 00:06:35.567 } 00:06:35.567 Got JSON-RPC error response 00:06:35.567 response: 00:06:35.567 { 00:06:35.567 "code": -32601, 00:06:35.567 "message": "Method not found" 00:06:35.567 } 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.567 04:12:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71582 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71582 ']' 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71582 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71582 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.567 killing process with pid 71582 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71582' 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@973 -- # kill 71582 00:06:35.567 04:12:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 71582 00:06:35.828 00:06:35.828 real 0m1.778s 00:06:35.828 user 0m2.140s 00:06:35.828 sys 0m0.381s 00:06:35.828 04:12:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.828 04:12:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.828 ************************************ 00:06:35.828 END TEST app_cmdline 00:06:35.828 ************************************ 00:06:35.828 04:12:21 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.828 04:12:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.828 04:12:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.828 04:12:21 -- common/autotest_common.sh@10 -- # set +x 00:06:35.828 ************************************ 00:06:35.828 START TEST version 00:06:35.828 ************************************ 00:06:35.828 04:12:21 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.828 * Looking for test storage... 
00:06:35.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:35.828 04:12:21 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.828 04:12:21 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.828 04:12:21 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.828 04:12:21 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.828 04:12:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.828 04:12:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.828 04:12:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.828 04:12:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.828 04:12:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.828 04:12:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.828 04:12:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.828 04:12:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.828 04:12:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.828 04:12:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.828 04:12:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.828 04:12:21 version -- scripts/common.sh@344 -- # case "$op" in 00:06:35.828 04:12:21 version -- scripts/common.sh@345 -- # : 1 00:06:35.828 04:12:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.828 04:12:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.828 04:12:21 version -- scripts/common.sh@365 -- # decimal 1 00:06:35.828 04:12:21 version -- scripts/common.sh@353 -- # local d=1 00:06:35.828 04:12:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.828 04:12:21 version -- scripts/common.sh@355 -- # echo 1 00:06:35.828 04:12:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.828 04:12:21 version -- scripts/common.sh@366 -- # decimal 2 00:06:35.828 04:12:21 version -- scripts/common.sh@353 -- # local d=2 00:06:35.828 04:12:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.828 04:12:21 version -- scripts/common.sh@355 -- # echo 2 00:06:35.828 04:12:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.828 04:12:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.828 04:12:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.828 04:12:21 version -- scripts/common.sh@368 -- # return 0 00:06:35.828 04:12:21 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.828 04:12:21 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.828 --rc genhtml_branch_coverage=1 00:06:35.828 --rc genhtml_function_coverage=1 00:06:35.828 --rc genhtml_legend=1 00:06:35.828 --rc geninfo_all_blocks=1 00:06:35.828 --rc geninfo_unexecuted_blocks=1 00:06:35.829 00:06:35.829 ' 00:06:35.829 04:12:21 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.829 --rc genhtml_branch_coverage=1 00:06:35.829 --rc genhtml_function_coverage=1 00:06:35.829 --rc genhtml_legend=1 00:06:35.829 --rc geninfo_all_blocks=1 00:06:35.829 --rc geninfo_unexecuted_blocks=1 00:06:35.829 00:06:35.829 ' 00:06:35.829 04:12:21 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.829 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:35.829 --rc genhtml_branch_coverage=1 00:06:35.829 --rc genhtml_function_coverage=1 00:06:35.829 --rc genhtml_legend=1 00:06:35.829 --rc geninfo_all_blocks=1 00:06:35.829 --rc geninfo_unexecuted_blocks=1 00:06:35.829 00:06:35.829 ' 00:06:35.829 04:12:21 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.829 --rc genhtml_branch_coverage=1 00:06:35.829 --rc genhtml_function_coverage=1 00:06:35.829 --rc genhtml_legend=1 00:06:35.829 --rc geninfo_all_blocks=1 00:06:35.829 --rc geninfo_unexecuted_blocks=1 00:06:35.829 00:06:35.829 ' 00:06:35.829 04:12:21 version -- app/version.sh@17 -- # get_header_version major 00:06:35.829 04:12:21 version -- app/version.sh@14 -- # cut -f2 00:06:35.829 04:12:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.829 04:12:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.090 04:12:21 version -- app/version.sh@17 -- # major=25 00:06:36.090 04:12:21 version -- app/version.sh@18 -- # get_header_version minor 00:06:36.090 04:12:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.090 04:12:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.090 04:12:21 version -- app/version.sh@14 -- # cut -f2 00:06:36.090 04:12:21 version -- app/version.sh@18 -- # minor=1 00:06:36.090 04:12:21 version -- app/version.sh@19 -- # get_header_version patch 00:06:36.090 04:12:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.090 04:12:21 version -- app/version.sh@14 -- # cut -f2 00:06:36.090 04:12:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.090 04:12:21 version -- app/version.sh@19 -- # patch=0 00:06:36.090 04:12:21 version -- app/version.sh@20 -- # get_header_version suffix 00:06:36.090 04:12:21 version -- app/version.sh@14 -- # cut -f2 00:06:36.090 04:12:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.090 04:12:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.090 04:12:21 version -- app/version.sh@20 -- # suffix=-pre 00:06:36.090 04:12:21 version -- app/version.sh@22 -- # version=25.1 00:06:36.090 04:12:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:36.090 04:12:21 version -- app/version.sh@28 -- # version=25.1rc0 00:06:36.090 04:12:21 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:36.090 04:12:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:36.090 04:12:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:36.090 04:12:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:36.090 00:06:36.090 real 0m0.204s 00:06:36.090 user 0m0.140s 00:06:36.090 sys 0m0.090s 00:06:36.090 04:12:21 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.090 ************************************ 00:06:36.090 END TEST version 00:06:36.090 ************************************ 00:06:36.090 04:12:21 version -- common/autotest_common.sh@10 -- # set +x 00:06:36.091 04:12:21 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:36.091 04:12:21 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:36.091 04:12:21 -- spdk/autotest.sh@194 -- # uname -s 00:06:36.091 04:12:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:36.091 04:12:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:36.091 04:12:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:36.091 04:12:21 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:36.091 04:12:21 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:36.091 04:12:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.091 04:12:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.091 04:12:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.091 ************************************ 00:06:36.091 START TEST blockdev_nvme 00:06:36.091 ************************************ 00:06:36.091 04:12:21 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:36.091 * Looking for test storage... 00:06:36.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:36.091 04:12:21 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.091 04:12:21 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.091 04:12:21 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.091 04:12:21 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.091 04:12:21 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:36.352 04:12:21 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:36.352 04:12:21 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.352 04:12:21 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:36.352 04:12:21 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.352 04:12:21 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.352 04:12:21 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.352 04:12:21 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:36.352 04:12:21 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.352 04:12:21 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.352 --rc genhtml_branch_coverage=1 00:06:36.353 --rc genhtml_function_coverage=1 00:06:36.353 --rc genhtml_legend=1 00:06:36.353 --rc geninfo_all_blocks=1 00:06:36.353 --rc geninfo_unexecuted_blocks=1 00:06:36.353 00:06:36.353 ' 00:06:36.353 04:12:21 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.353 --rc genhtml_branch_coverage=1 00:06:36.353 --rc genhtml_function_coverage=1 00:06:36.353 --rc genhtml_legend=1 00:06:36.353 --rc geninfo_all_blocks=1 00:06:36.353 --rc geninfo_unexecuted_blocks=1 00:06:36.353 00:06:36.353 ' 00:06:36.353 04:12:21 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.353 --rc genhtml_branch_coverage=1 00:06:36.353 --rc genhtml_function_coverage=1 00:06:36.353 --rc genhtml_legend=1 00:06:36.353 --rc geninfo_all_blocks=1 00:06:36.353 --rc geninfo_unexecuted_blocks=1 00:06:36.353 00:06:36.353 ' 00:06:36.353 04:12:21 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.353 --rc genhtml_branch_coverage=1 00:06:36.353 --rc genhtml_function_coverage=1 00:06:36.353 --rc genhtml_legend=1 00:06:36.353 --rc geninfo_all_blocks=1 00:06:36.353 --rc geninfo_unexecuted_blocks=1 00:06:36.353 00:06:36.353 ' 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:36.353 04:12:21 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71743 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71743 00:06:36.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.353 04:12:21 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71743 ']' 00:06:36.353 04:12:21 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.353 04:12:21 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.353 04:12:21 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.353 04:12:21 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:36.353 04:12:21 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.353 04:12:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:36.353 [2024-11-17 04:12:21.900109] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
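(start_spdk_tgt above launches the target binary and waitforlisten then blocks until its RPC socket answers. A simplified approximation of that wait, assuming the default /var/tmp/spdk.sock socket and the repo paths used in this job; the real helper also handles retry limits and pid checks:)
    # launch the target in the background and remember its pid for the cleanup trap
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # poll until the RPC socket answers; rpc_get_methods is a lightweight query
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
    done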
00:06:36.353 [2024-11-17 04:12:21.900222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71743 ] 00:06:36.353 [2024-11-17 04:12:22.058679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.615 [2024-11-17 04:12:22.077524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.187 04:12:22 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.187 04:12:22 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:37.187 04:12:22 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:37.187 04:12:22 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:37.187 04:12:22 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:37.187 04:12:22 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:37.187 04:12:22 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:37.187 04:12:22 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:37.187 04:12:22 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.187 04:12:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.448 04:12:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.448 04:12:23 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.449 04:12:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:37.449 04:12:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.449 04:12:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.449 04:12:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.449 04:12:23 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:37.449 04:12:23 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:37.449 04:12:23 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.449 04:12:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.711 04:12:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.711 04:12:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:37.711 04:12:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:37.711 04:12:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7970fc66-0cdc-47ed-adce-b01b3b94986c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7970fc66-0cdc-47ed-adce-b01b3b94986c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "4a36bef4-8e38-4f8d-84e3-90e60a70d9a9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4a36bef4-8e38-4f8d-84e3-90e60a70d9a9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "00621f4c-2189-4821-852c-0ed20e428bd3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "00621f4c-2189-4821-852c-0ed20e428bd3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "885fdc72-3bff-4d55-8dfd-ca51ef686fd6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "885fdc72-3bff-4d55-8dfd-ca51ef686fd6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "26c0a5ba-76e9-4df6-8fdb-a31aa80a4fe2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "26c0a5ba-76e9-4df6-8fdb-a31aa80a4fe2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "41cafc69-22a7-4907-80a1-c7519f3ca476"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "41cafc69-22a7-4907-80a1-c7519f3ca476",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:37.711 04:12:23 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:37.711 04:12:23 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:37.711 04:12:23 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:37.711 04:12:23 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71743 00:06:37.711 04:12:23 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71743 ']' 00:06:37.711 04:12:23 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71743 00:06:37.711 04:12:23 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:37.711 04:12:23 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.711 04:12:23 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71743 00:06:37.711 04:12:23 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.711 04:12:23 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.711 killing process with pid 71743 00:06:37.711 04:12:23 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71743' 00:06:37.711 04:12:23 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71743 00:06:37.711 04:12:23 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71743 00:06:37.973 04:12:23 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:37.973 04:12:23 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:37.973 04:12:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:37.973 04:12:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.973 04:12:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.973 ************************************ 00:06:37.973 START TEST bdev_hello_world 00:06:37.973 ************************************ 00:06:37.973 04:12:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:37.973 [2024-11-17 04:12:23.687024] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:37.973 [2024-11-17 04:12:23.687163] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71816 ] 00:06:38.234 [2024-11-17 04:12:23.846360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.234 [2024-11-17 04:12:23.875040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.806 [2024-11-17 04:12:24.277998] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:38.806 [2024-11-17 04:12:24.278070] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:38.806 [2024-11-17 04:12:24.278096] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:38.807 [2024-11-17 04:12:24.280527] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:38.807 [2024-11-17 04:12:24.281313] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:38.807 [2024-11-17 04:12:24.281351] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:38.807 [2024-11-17 04:12:24.281976] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
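(Both the setup step above and the hello_bdev / bdevio runs attach the same four QEMU NVMe controllers, either by piping gen_nvme.sh output into load_subsystem_config or via the --json config file. A sketch of building the same configuration with individual rpc.py calls, using the PCIe addresses from this run; flag spelling follows standard rpc.py usage and is an assumption, not taken from this log:)
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # one attach per emulated QEMU controller; each exposes its namespaces as NvmeXnY bdevs
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    $RPC bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
    $RPC bdev_nvme_attach_controller -b Nvme2 -t PCIe -a 0000:00:12.0
    $RPC bdev_nvme_attach_controller -b Nvme3 -t PCIe -a 0000:00:13.0
    # list unclaimed bdevs the same way blockdev.sh builds its bdev list
    $RPC bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'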
00:06:38.807 00:06:38.807 [2024-11-17 04:12:24.282006] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:38.807 ************************************ 00:06:38.807 END TEST bdev_hello_world 00:06:38.807 ************************************ 00:06:38.807 00:06:38.807 real 0m0.859s 00:06:38.807 user 0m0.562s 00:06:38.807 sys 0m0.191s 00:06:38.807 04:12:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.807 04:12:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:39.069 04:12:24 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:39.069 04:12:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:39.069 04:12:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.069 04:12:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:39.069 ************************************ 00:06:39.069 START TEST bdev_bounds 00:06:39.069 ************************************ 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:39.069 Process bdevio pid: 71847 00:06:39.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71847 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71847' 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71847 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71847 ']' 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.069 04:12:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:39.069 [2024-11-17 04:12:24.617454] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:39.069 [2024-11-17 04:12:24.617610] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71847 ] 00:06:39.069 [2024-11-17 04:12:24.779218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.329 [2024-11-17 04:12:24.813139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.330 [2024-11-17 04:12:24.813535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.330 [2024-11-17 04:12:24.813560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.904 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.904 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:39.904 04:12:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:39.904 I/O targets: 00:06:39.904 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:39.904 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:39.904 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.904 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.904 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.904 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:39.904 00:06:39.904 00:06:39.904 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.904 http://cunit.sourceforge.net/ 00:06:39.904 00:06:39.904 00:06:39.904 Suite: bdevio tests on: Nvme3n1 00:06:39.904 Test: blockdev write read block ...passed 00:06:39.904 Test: blockdev write zeroes read block ...passed 00:06:39.904 Test: blockdev write zeroes read no split ...passed 00:06:39.904 Test: blockdev write zeroes read split ...passed 00:06:39.904 Test: blockdev write zeroes read split partial ...passed 00:06:39.904 Test: blockdev reset ...[2024-11-17 04:12:25.605361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:39.904 passed 00:06:39.904 Test: blockdev write read 8 blocks ...[2024-11-17 04:12:25.608892] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:39.904 passed 00:06:39.904 Test: blockdev write read size > 128k ...passed 00:06:39.904 Test: blockdev write read invalid size ...passed 00:06:39.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.904 Test: blockdev write read max offset ...passed 00:06:39.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.904 Test: blockdev writev readv 8 blocks ...passed 00:06:39.904 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.904 Test: blockdev writev readv block ...passed 00:06:39.904 Test: blockdev writev readv size > 128k ...passed 00:06:39.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.904 Test: blockdev comparev and writev ...[2024-11-17 04:12:25.626327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:39.904 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2cf606000 len:0x1000 00:06:39.904 [2024-11-17 04:12:25.626423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.904 passed 00:06:40.167 Test: blockdev nvme passthru vendor specific ...passed 00:06:40.167 Test: blockdev nvme admin passthru ...[2024-11-17 04:12:25.629208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.167 [2024-11-17 04:12:25.629259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.167 passed 00:06:40.167 Test: blockdev copy ...passed 00:06:40.167 Suite: bdevio tests on: Nvme2n3 00:06:40.167 Test: blockdev write read block ...passed 00:06:40.167 Test: blockdev write zeroes read block ...passed 00:06:40.167 Test: blockdev write zeroes read no split ...passed 00:06:40.167 Test: blockdev write zeroes read split ...passed 00:06:40.167 Test: blockdev write zeroes read split partial ...passed 00:06:40.167 Test: blockdev reset ...[2024-11-17 04:12:25.659584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:40.167 [2024-11-17 04:12:25.662210] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:40.167 passed 00:06:40.167 Test: blockdev write read 8 blocks ...passed 00:06:40.167 Test: blockdev write read size > 128k ...passed 00:06:40.167 Test: blockdev write read invalid size ...passed 00:06:40.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.167 Test: blockdev write read max offset ...passed 00:06:40.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.167 Test: blockdev writev readv 8 blocks ...passed 00:06:40.167 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.167 Test: blockdev writev readv block ...passed 00:06:40.167 Test: blockdev writev readv size > 128k ...passed 00:06:40.167 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.167 Test: blockdev comparev and writev ...[2024-11-17 04:12:25.679561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb805000 len:0x1000 00:06:40.167 [2024-11-17 04:12:25.679614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.167 passed 00:06:40.167 Test: blockdev nvme passthru rw ...passed 00:06:40.167 Test: blockdev nvme passthru vendor specific ...passed 00:06:40.167 Test: blockdev nvme admin passthru ...[2024-11-17 04:12:25.681937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.167 [2024-11-17 04:12:25.681978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.167 passed 00:06:40.167 Test: blockdev copy ...passed 00:06:40.167 Suite: bdevio tests on: Nvme2n2 00:06:40.167 Test: blockdev write read block ...passed 00:06:40.167 Test: blockdev write zeroes read block ...passed 00:06:40.167 Test: blockdev write zeroes read no split ...passed 00:06:40.167 Test: blockdev write zeroes read split ...passed 00:06:40.167 Test: blockdev write zeroes read split partial ...passed 00:06:40.167 Test: blockdev reset ...[2024-11-17 04:12:25.713765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:40.167 passed 00:06:40.167 Test: blockdev write read 8 blocks ...[2024-11-17 04:12:25.716532] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:40.167 passed 00:06:40.167 Test: blockdev write read size > 128k ...passed 00:06:40.167 Test: blockdev write read invalid size ...passed 00:06:40.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.167 Test: blockdev write read max offset ...passed 00:06:40.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.167 Test: blockdev writev readv 8 blocks ...passed 00:06:40.167 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.167 Test: blockdev writev readv block ...passed 00:06:40.167 Test: blockdev writev readv size > 128k ...passed 00:06:40.167 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.167 Test: blockdev comparev and writev ...[2024-11-17 04:12:25.731289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e4636000 len:0x1000 00:06:40.167 [2024-11-17 04:12:25.731341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.167 passed 00:06:40.167 Test: blockdev nvme passthru rw ...passed 00:06:40.167 Test: blockdev nvme passthru vendor specific ...passed 00:06:40.167 Test: blockdev nvme admin passthru ...[2024-11-17 04:12:25.733960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.167 [2024-11-17 04:12:25.734009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.167 passed 00:06:40.167 Test: blockdev copy ...passed 00:06:40.167 Suite: bdevio tests on: Nvme2n1 00:06:40.167 Test: blockdev write read block ...passed 00:06:40.167 Test: blockdev write zeroes read block ...passed 00:06:40.168 Test: blockdev write zeroes read no split ...passed 00:06:40.168 Test: blockdev write zeroes read split ...passed 00:06:40.168 Test: blockdev write zeroes read split partial ...passed 00:06:40.168 Test: blockdev reset ...[2024-11-17 04:12:25.768910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:40.168 [2024-11-17 04:12:25.771298] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:40.168 Test: blockdev write read 8 blocks ...uccessful. 
00:06:40.168 passed 00:06:40.168 Test: blockdev write read size > 128k ...passed 00:06:40.168 Test: blockdev write read invalid size ...passed 00:06:40.168 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.168 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.168 Test: blockdev write read max offset ...passed 00:06:40.168 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.168 Test: blockdev writev readv 8 blocks ...passed 00:06:40.168 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.168 Test: blockdev writev readv block ...passed 00:06:40.168 Test: blockdev writev readv size > 128k ...passed 00:06:40.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.168 Test: blockdev comparev and writev ...[2024-11-17 04:12:25.789994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e4630000 len:0x1000 00:06:40.168 [2024-11-17 04:12:25.790054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.168 passed 00:06:40.168 Test: blockdev nvme passthru rw ...passed 00:06:40.168 Test: blockdev nvme passthru vendor specific ...passed 00:06:40.168 Test: blockdev nvme admin passthru ...[2024-11-17 04:12:25.793215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.168 [2024-11-17 04:12:25.793262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.168 passed 00:06:40.168 Test: blockdev copy ...passed 00:06:40.168 Suite: bdevio tests on: Nvme1n1 00:06:40.168 Test: blockdev write read block ...passed 00:06:40.168 Test: blockdev write zeroes read block ...passed 00:06:40.168 Test: blockdev write zeroes read no split ...passed 00:06:40.168 Test: blockdev write zeroes read split ...passed 00:06:40.168 Test: blockdev write zeroes read split partial ...passed 00:06:40.168 Test: blockdev reset ...[2024-11-17 04:12:25.823214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:40.168 [2024-11-17 04:12:25.825885] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:40.168 passed 00:06:40.168 Test: blockdev write read 8 blocks ...passed 00:06:40.168 Test: blockdev write read size > 128k ...passed 00:06:40.168 Test: blockdev write read invalid size ...passed 00:06:40.168 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.168 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.168 Test: blockdev write read max offset ...passed 00:06:40.168 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.168 Test: blockdev writev readv 8 blocks ...passed 00:06:40.168 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.168 Test: blockdev writev readv block ...passed 00:06:40.168 Test: blockdev writev readv size > 128k ...passed 00:06:40.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.168 Test: blockdev comparev and writev ...[2024-11-17 04:12:25.844055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e462c000 len:0x1000 00:06:40.168 [2024-11-17 04:12:25.844110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.168 passed 00:06:40.168 Test: blockdev nvme passthru rw ...passed 00:06:40.168 Test: blockdev nvme passthru vendor specific ...[2024-11-17 04:12:25.846889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:40.168 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:40.168 [2024-11-17 04:12:25.847037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.168 passed 00:06:40.168 Test: blockdev copy ...passed 00:06:40.168 Suite: bdevio tests on: Nvme0n1 00:06:40.168 Test: blockdev write read block ...passed 00:06:40.168 Test: blockdev write zeroes read block ...passed 00:06:40.168 Test: blockdev write zeroes read no split ...passed 00:06:40.168 Test: blockdev write zeroes read split ...passed 00:06:40.168 Test: blockdev write zeroes read split partial ...passed 00:06:40.168 Test: blockdev reset ...[2024-11-17 04:12:25.879913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:40.168 [2024-11-17 04:12:25.882424] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spassed 00:06:40.168 Test: blockdev write read 8 blocks ...uccessful. 
00:06:40.168 passed 00:06:40.168 Test: blockdev write read size > 128k ...passed 00:06:40.168 Test: blockdev write read invalid size ...passed 00:06:40.168 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.168 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.168 Test: blockdev write read max offset ...passed 00:06:40.168 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.430 Test: blockdev writev readv 8 blocks ...passed 00:06:40.430 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.430 Test: blockdev writev readv block ...passed 00:06:40.430 Test: blockdev writev readv size > 128k ...passed 00:06:40.430 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.430 Test: blockdev comparev and writev ...passed 00:06:40.430 Test: blockdev nvme passthru rw ...[2024-11-17 04:12:25.897666] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:40.430 separate metadata which is not supported yet. 00:06:40.430 passed 00:06:40.430 Test: blockdev nvme passthru vendor specific ...passed 00:06:40.430 Test: blockdev nvme admin passthru ...[2024-11-17 04:12:25.899665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:40.430 [2024-11-17 04:12:25.899720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:40.430 passed 00:06:40.430 Test: blockdev copy ...passed 00:06:40.430 00:06:40.430 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.430 suites 6 6 n/a 0 0 00:06:40.430 tests 138 138 138 0 0 00:06:40.430 asserts 893 893 893 0 n/a 00:06:40.430 00:06:40.430 Elapsed time = 0.708 seconds 00:06:40.430 0 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71847 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71847 ']' 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71847 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71847 00:06:40.430 killing process with pid 71847 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71847' 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71847 00:06:40.430 04:12:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71847 00:06:40.430 04:12:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:40.430 00:06:40.430 real 0m1.588s 00:06:40.430 user 0m3.919s 00:06:40.430 sys 0m0.333s 00:06:40.430 04:12:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.430 ************************************ 00:06:40.430 END TEST bdev_bounds 00:06:40.430 ************************************ 00:06:40.430 04:12:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # 
set +x 00:06:40.693 04:12:26 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:40.693 04:12:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:40.693 04:12:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.693 04:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.693 ************************************ 00:06:40.693 START TEST bdev_nbd 00:06:40.693 ************************************ 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:40.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71890 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71890 /var/tmp/spdk-nbd.sock 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71890 ']' 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
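(The nbd path above only runs when /sys/module/nbd exists, i.e. when the kernel nbd driver is loaded. A small hedged sketch of satisfying that precondition on a machine where it is not; the harness itself only performs the check and does not load the module:)
    # load the kernel nbd driver if it is not already present
    [[ -e /sys/module/nbd ]] || sudo modprobe nbd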
00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:40.693 04:12:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:40.693 [2024-11-17 04:12:26.272323] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:40.693 [2024-11-17 04:12:26.272499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.954 [2024-11-17 04:12:26.427473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.954 [2024-11-17 04:12:26.457878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:41.571 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:41.850 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:41.850 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep 
-q -w nbd0 /proc/partitions 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.851 1+0 records in 00:06:41.851 1+0 records out 00:06:41.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028395 s, 14.4 MB/s 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:41.851 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.112 1+0 records in 00:06:42.112 1+0 records out 00:06:42.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000896035 s, 4.6 MB/s 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 
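(Each export is verified the same way as nbd0 above: nbd_start_disk maps the bdev, the test waits for the kernel to publish the device in /proc/partitions, reads one 4 KiB block with O_DIRECT, and confirms the byte count. A condensed per-device sketch on the same RPC socket; the output path is simplified to /tmp rather than the repo's nbdtest file:)
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # export the bdev; with no device argument the RPC picks a free /dev/nbdX and prints it
    nbd_dev=$($RPC nbd_start_disk Nvme0n1)
    # wait until the kernel lists the device in /proc/partitions
    until grep -qw "$(basename "$nbd_dev")" /proc/partitions; do sleep 0.1; done
    # read a single 4096-byte block directly from the device and make sure all of it arrived
    dd if="$nbd_dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && echo "read check passed for $nbd_dev"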
00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:42.112 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.370 1+0 records in 00:06:42.370 1+0 records out 00:06:42.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429796 s, 9.5 MB/s 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:42.370 04:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.631 1+0 records in 00:06:42.631 1+0 records out 00:06:42.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135399 s, 3.0 MB/s 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:42.631 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.892 1+0 records in 00:06:42.892 1+0 records out 00:06:42.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758477 s, 5.4 MB/s 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:42.892 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:43.153 
04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.153 1+0 records in 00:06:43.153 1+0 records out 00:06:43.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000869514 s, 4.7 MB/s 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd0", 00:06:43.153 "bdev_name": "Nvme0n1" 00:06:43.153 }, 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd1", 00:06:43.153 "bdev_name": "Nvme1n1" 00:06:43.153 }, 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd2", 00:06:43.153 "bdev_name": "Nvme2n1" 00:06:43.153 }, 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd3", 00:06:43.153 "bdev_name": "Nvme2n2" 00:06:43.153 }, 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd4", 00:06:43.153 "bdev_name": "Nvme2n3" 00:06:43.153 }, 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd5", 00:06:43.153 "bdev_name": "Nvme3n1" 00:06:43.153 } 00:06:43.153 ]' 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:43.153 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd0", 00:06:43.153 "bdev_name": "Nvme0n1" 00:06:43.153 }, 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd1", 00:06:43.153 "bdev_name": "Nvme1n1" 00:06:43.153 }, 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd2", 00:06:43.153 "bdev_name": "Nvme2n1" 
00:06:43.153 }, 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd3", 00:06:43.153 "bdev_name": "Nvme2n2" 00:06:43.153 }, 00:06:43.153 { 00:06:43.153 "nbd_device": "/dev/nbd4", 00:06:43.154 "bdev_name": "Nvme2n3" 00:06:43.154 }, 00:06:43.154 { 00:06:43.154 "nbd_device": "/dev/nbd5", 00:06:43.154 "bdev_name": "Nvme3n1" 00:06:43.154 } 00:06:43.154 ]' 00:06:43.154 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:43.154 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:43.154 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.154 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:43.154 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.154 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:43.154 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.154 04:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.415 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.415 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.415 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.415 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.415 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.415 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.415 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.415 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.415 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.415 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.684 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.684 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.684 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.684 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.684 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.684 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.684 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.684 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.684 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.684 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:43.952 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:43.952 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 
00:06:43.952 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:43.952 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.952 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.952 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:43.952 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.952 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.952 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.952 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:44.213 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:44.213 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:44.213 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:44.213 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.213 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.213 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:44.213 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.213 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.213 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.213 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:44.474 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:44.474 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:44.474 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:44.474 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.474 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.474 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:44.474 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.474 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.474 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.474 04:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:44.474 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:44.735 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:44.735 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:44.735 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.735 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.735 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:44.735 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.735 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 
0 00:06:44.735 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.735 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:44.736 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 
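The nbd_rpc_data_verify pass that begins here re-attaches the six NVMe bdevs to /dev/nbd0, /dev/nbd1 and /dev/nbd10 through /dev/nbd13 before exercising them with data. The attach-and-wait pattern traced by the waitfornbd helper can be reproduced by hand; a minimal sketch for a single device, assuming an SPDK application is already serving /var/tmp/spdk-nbd.sock and exposes a bdev named Nvme0n1 (only the 20-attempt bound is visible in the trace, the retry sleep below is an assumption):

  #!/usr/bin/env bash
  # Attach one bdev over NBD, wait for the kernel to register the node, smoke-read it,
  # then list the mapping and detach. Socket and script paths are the ones used in this run.
  SOCK=/var/tmp/spdk-nbd.sock
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0

  for ((i = 1; i <= 20; i++)); do
      # Ready once the device name shows up in /proc/partitions.
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1   # assumed retry interval; the helper only exposes the 20-try bound
  done

  # Single direct-I/O read, matching the dd smoke test in the helper.
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

  # Current bdev-to-nbd mapping as JSON, then detach.
  "$RPC" -s "$SOCK" nbd_get_disks
  "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0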
00:06:44.997 /dev/nbd0 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.997 1+0 records in 00:06:44.997 1+0 records out 00:06:44.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773444 s, 5.3 MB/s 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:44.997 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:45.259 /dev/nbd1 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.259 1+0 records in 00:06:45.259 1+0 records out 00:06:45.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339215 s, 12.1 MB/s 00:06:45.259 04:12:30 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:45.259 04:12:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:45.519 /dev/nbd10 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.519 1+0 records in 00:06:45.519 1+0 records out 00:06:45.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000975273 s, 4.2 MB/s 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:45.519 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:45.778 /dev/nbd11 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.778 04:12:31 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.778 1+0 records in 00:06:45.778 1+0 records out 00:06:45.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000880794 s, 4.7 MB/s 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:45.778 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:46.036 /dev/nbd12 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.036 1+0 records in 00:06:46.036 1+0 records out 00:06:46.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000822189 s, 5.0 MB/s 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:46.036 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:46.295 /dev/nbd13 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.295 1+0 records in 00:06:46.295 1+0 records out 00:06:46.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391468 s, 10.5 MB/s 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.295 04:12:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd0", 00:06:46.553 "bdev_name": "Nvme0n1" 00:06:46.553 }, 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd1", 00:06:46.553 "bdev_name": "Nvme1n1" 00:06:46.553 }, 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd10", 00:06:46.553 "bdev_name": "Nvme2n1" 00:06:46.553 }, 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd11", 00:06:46.553 "bdev_name": "Nvme2n2" 00:06:46.553 }, 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd12", 00:06:46.553 "bdev_name": "Nvme2n3" 00:06:46.553 }, 00:06:46.553 { 00:06:46.553 
"nbd_device": "/dev/nbd13", 00:06:46.553 "bdev_name": "Nvme3n1" 00:06:46.553 } 00:06:46.553 ]' 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd0", 00:06:46.553 "bdev_name": "Nvme0n1" 00:06:46.553 }, 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd1", 00:06:46.553 "bdev_name": "Nvme1n1" 00:06:46.553 }, 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd10", 00:06:46.553 "bdev_name": "Nvme2n1" 00:06:46.553 }, 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd11", 00:06:46.553 "bdev_name": "Nvme2n2" 00:06:46.553 }, 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd12", 00:06:46.553 "bdev_name": "Nvme2n3" 00:06:46.553 }, 00:06:46.553 { 00:06:46.553 "nbd_device": "/dev/nbd13", 00:06:46.553 "bdev_name": "Nvme3n1" 00:06:46.553 } 00:06:46.553 ]' 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.553 /dev/nbd1 00:06:46.553 /dev/nbd10 00:06:46.553 /dev/nbd11 00:06:46.553 /dev/nbd12 00:06:46.553 /dev/nbd13' 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.553 /dev/nbd1 00:06:46.553 /dev/nbd10 00:06:46.553 /dev/nbd11 00:06:46.553 /dev/nbd12 00:06:46.553 /dev/nbd13' 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:46.553 256+0 records in 00:06:46.553 256+0 records out 00:06:46.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00771515 s, 136 MB/s 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.553 256+0 records in 00:06:46.553 256+0 records out 00:06:46.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0660335 s, 15.9 MB/s 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.553 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:06:46.812 256+0 records in 00:06:46.812 256+0 records out 00:06:46.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0635268 s, 16.5 MB/s 00:06:46.812 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.812 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:46.812 256+0 records in 00:06:46.812 256+0 records out 00:06:46.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134942 s, 7.8 MB/s 00:06:46.812 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.812 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:46.812 256+0 records in 00:06:46.812 256+0 records out 00:06:46.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0815755 s, 12.9 MB/s 00:06:46.812 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.812 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:47.073 256+0 records in 00:06:47.073 256+0 records out 00:06:47.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225413 s, 4.7 MB/s 00:06:47.073 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.073 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:47.333 256+0 records in 00:06:47.333 256+0 records out 00:06:47.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.220042 s, 4.8 MB/s 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.333 04:12:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.333 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.593 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.593 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.593 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.593 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.593 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.593 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.593 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.593 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.593 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.593 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.854 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.854 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.854 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.854 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.854 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.854 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.854 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.854 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.854 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.854 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:48.114 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:48.114 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:48.114 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:48.114 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.114 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.114 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:48.114 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.114 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.114 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.114 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:48.373 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:48.373 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:48.373 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:48.373 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.373 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.373 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:48.373 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.373 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.373 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.373 04:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:48.634 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:48.634 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:48.634 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:48.634 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.634 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.635 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:48.893 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:49.154 malloc_lvol_verify 00:06:49.154 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:49.414 2b5a3482-3c0a-449a-babb-2154d01dba14 00:06:49.414 04:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:49.675 44a9a926-c003-46a1-8b86-c6ea352eaada 00:06:49.675 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:49.675 /dev/nbd0 00:06:49.675 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:49.675 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:49.675 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:49.675 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:49.675 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:49.675 mke2fs 1.47.0 (5-Feb-2023) 00:06:49.675 
Discarding device blocks: 0/4096 done 00:06:49.675 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:49.675 00:06:49.675 Allocating group tables: 0/1 done 00:06:49.675 Writing inode tables: 0/1 done 00:06:49.936 Creating journal (1024 blocks): done 00:06:49.936 Writing superblocks and filesystem accounting information: 0/1 done 00:06:49.936 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71890 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71890 ']' 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71890 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71890 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.936 killing process with pid 71890 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71890' 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71890 00:06:49.936 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71890 00:06:50.197 04:12:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:50.197 00:06:50.197 real 0m9.632s 00:06:50.197 user 0m13.871s 00:06:50.197 sys 0m3.245s 00:06:50.197 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.197 ************************************ 00:06:50.197 END TEST bdev_nbd 00:06:50.197 ************************************ 00:06:50.197 04:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
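Two checks close out the bdev_nbd test above. First, a 1 MiB random pattern is written to every attached node with direct I/O and then read back for a byte-for-byte comparison; condensed into a sketch, using the device list from this run and an arbitrary path for the pattern file:

  #!/usr/bin/env bash
  # Write/verify round trip over the NBD nodes, mirroring the dd and cmp calls traced above.
  nbd_list="/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13"
  pattern=/tmp/nbdrandtest

  dd if=/dev/urandom of="$pattern" bs=4096 count=256            # 1 MiB random pattern
  for dev in $nbd_list; do
      dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in $nbd_list; do
      cmp -b -n 1M "$pattern" "$dev"                            # compare the first 1 MiB back
  done
  rm "$pattern"

Second, a logical volume is built on a malloc bdev, exported over NBD and formatted, which is what the mke2fs output above reflects. The same sequence, with socket path, object names and sizes taken from the trace and a live SPDK app on that socket assumed:

  #!/usr/bin/env bash
  # Lvol-over-NBD check: malloc bdev -> lvolstore -> 4 MiB lvol -> NBD export -> mkfs.ext4.
  SOCK=/var/tmp/spdk-nbd.sock
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$RPC" -s "$SOCK" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
  "$RPC" -s "$SOCK" bdev_lvol_create_lvstore malloc_lvol_verify lvs
  "$RPC" -s "$SOCK" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume in lvstore "lvs"
  "$RPC" -s "$SOCK" nbd_start_disk lvs/lvol /dev/nbd0

  mkfs.ext4 /dev/nbd0        # must complete cleanly on the exported volume

  "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0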
00:06:50.197 skipping fio tests on NVMe due to multi-ns failures. 00:06:50.197 04:12:35 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:50.197 04:12:35 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:50.197 04:12:35 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:50.197 04:12:35 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:50.197 04:12:35 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:50.197 04:12:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:50.197 04:12:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.197 04:12:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:50.197 ************************************ 00:06:50.197 START TEST bdev_verify 00:06:50.197 ************************************ 00:06:50.197 04:12:35 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:50.457 [2024-11-17 04:12:35.955355] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:50.457 [2024-11-17 04:12:35.955483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72269 ] 00:06:50.457 [2024-11-17 04:12:36.113307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.457 [2024-11-17 04:12:36.133967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.457 [2024-11-17 04:12:36.134002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.028 Running I/O for 5 seconds... 
00:06:52.982 17408.00 IOPS, 68.00 MiB/s [2024-11-17T04:12:40.094Z] 18304.00 IOPS, 71.50 MiB/s [2024-11-17T04:12:41.034Z] 19029.33 IOPS, 74.33 MiB/s [2024-11-17T04:12:41.971Z] 18960.00 IOPS, 74.06 MiB/s [2024-11-17T04:12:41.971Z] 18944.00 IOPS, 74.00 MiB/s 00:06:56.244 Latency(us) 00:06:56.244 [2024-11-17T04:12:41.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.244 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.244 Verification LBA range: start 0x0 length 0xbd0bd 00:06:56.244 Nvme0n1 : 5.05 1547.40 6.04 0.00 0.00 82383.84 18955.03 94775.14 00:06:56.244 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.244 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:56.244 Nvme0n1 : 5.04 1548.13 6.05 0.00 0.00 82340.03 16938.54 93161.94 00:06:56.244 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.244 Verification LBA range: start 0x0 length 0xa0000 00:06:56.245 Nvme1n1 : 5.05 1546.98 6.04 0.00 0.00 82298.65 20568.22 90338.86 00:06:56.245 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.245 Verification LBA range: start 0xa0000 length 0xa0000 00:06:56.245 Nvme1n1 : 5.07 1552.66 6.07 0.00 0.00 81785.95 6175.51 77030.01 00:06:56.245 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.245 Verification LBA range: start 0x0 length 0x80000 00:06:56.245 Nvme2n1 : 5.08 1548.63 6.05 0.00 0.00 81947.81 8872.57 86709.17 00:06:56.245 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.245 Verification LBA range: start 0x80000 length 0x80000 00:06:56.245 Nvme2n1 : 5.08 1561.31 6.10 0.00 0.00 81342.09 10435.35 67350.84 00:06:56.245 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.245 Verification LBA range: start 0x0 length 0x80000 00:06:56.245 Nvme2n2 : 5.09 1557.73 6.08 0.00 0.00 81424.97 7461.02 75416.81 00:06:56.245 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.245 Verification LBA range: start 0x80000 length 0x80000 00:06:56.245 Nvme2n2 : 5.09 1560.41 6.10 0.00 0.00 81213.08 12048.54 68964.04 00:06:56.245 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.245 Verification LBA range: start 0x0 length 0x80000 00:06:56.245 Nvme2n3 : 5.10 1557.32 6.08 0.00 0.00 81275.24 7763.50 75416.81 00:06:56.245 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.245 Verification LBA range: start 0x80000 length 0x80000 00:06:56.245 Nvme2n3 : 5.09 1559.96 6.09 0.00 0.00 81040.10 12300.60 72593.72 00:06:56.245 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.245 Verification LBA range: start 0x0 length 0x20000 00:06:56.245 Nvme3n1 : 5.10 1556.92 6.08 0.00 0.00 81131.14 8015.56 71383.83 00:06:56.245 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.245 Verification LBA range: start 0x20000 length 0x20000 00:06:56.245 Nvme3n1 : 5.09 1559.51 6.09 0.00 0.00 80899.47 11796.48 72593.72 00:06:56.245 [2024-11-17T04:12:41.972Z] =================================================================================================================== 00:06:56.245 [2024-11-17T04:12:41.972Z] Total : 18656.95 72.88 0.00 0.00 81586.78 6175.51 94775.14 00:06:57.234 00:06:57.234 real 0m6.726s 00:06:57.234 user 0m12.733s 00:06:57.234 sys 0m0.216s 00:06:57.234 04:12:42 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.234 ************************************ 00:06:57.234 END TEST bdev_verify 00:06:57.234 04:12:42 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:57.234 ************************************ 00:06:57.234 04:12:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:57.234 04:12:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:57.234 04:12:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.234 04:12:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.234 ************************************ 00:06:57.234 START TEST bdev_verify_big_io 00:06:57.234 ************************************ 00:06:57.234 04:12:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:57.234 [2024-11-17 04:12:42.749579] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:57.234 [2024-11-17 04:12:42.749707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72356 ] 00:06:57.234 [2024-11-17 04:12:42.899205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.234 [2024-11-17 04:12:42.919344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.234 [2024-11-17 04:12:42.919369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.802 Running I/O for 5 seconds... 
00:07:00.641 0.00 IOPS, 0.00 MiB/s [2024-11-17T04:12:48.904Z] 856.50 IOPS, 53.53 MiB/s [2024-11-17T04:12:49.473Z] 1323.33 IOPS, 82.71 MiB/s [2024-11-17T04:12:49.473Z] 1761.75 IOPS, 110.11 MiB/s 00:07:03.746 Latency(us) 00:07:03.746 [2024-11-17T04:12:49.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.746 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0x0 length 0xbd0b 00:07:03.746 Nvme0n1 : 5.81 110.21 6.89 0.00 0.00 1123490.26 26819.35 1206669.00 00:07:03.746 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:03.746 Nvme0n1 : 5.86 104.50 6.53 0.00 0.00 1169279.79 18753.38 1213121.77 00:07:03.746 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0x0 length 0xa000 00:07:03.746 Nvme1n1 : 5.81 110.16 6.89 0.00 0.00 1082700.72 108890.58 1000180.18 00:07:03.746 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0xa000 length 0xa000 00:07:03.746 Nvme1n1 : 5.86 104.99 6.56 0.00 0.00 1123514.10 111310.38 993727.41 00:07:03.746 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0x0 length 0x8000 00:07:03.746 Nvme2n1 : 5.81 110.09 6.88 0.00 0.00 1043416.30 147607.24 967916.31 00:07:03.746 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0x8000 length 0x8000 00:07:03.746 Nvme2n1 : 5.86 109.18 6.82 0.00 0.00 1055014.91 75416.81 987274.63 00:07:03.746 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0x0 length 0x8000 00:07:03.746 Nvme2n2 : 5.96 118.19 7.39 0.00 0.00 946979.02 44362.83 1000180.18 00:07:03.746 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0x8000 length 0x8000 00:07:03.746 Nvme2n2 : 5.96 111.33 6.96 0.00 0.00 991963.66 74206.92 987274.63 00:07:03.746 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0x0 length 0x8000 00:07:03.746 Nvme2n3 : 6.02 123.56 7.72 0.00 0.00 874396.33 31053.98 1025991.29 00:07:03.746 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0x8000 length 0x8000 00:07:03.746 Nvme2n3 : 6.03 123.18 7.70 0.00 0.00 871331.01 21878.94 987274.63 00:07:03.746 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0x0 length 0x2000 00:07:03.746 Nvme3n1 : 6.03 138.00 8.62 0.00 0.00 758933.20 1121.67 1058255.16 00:07:03.746 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.746 Verification LBA range: start 0x2000 length 0x2000 00:07:03.746 Nvme3n1 : 6.06 140.25 8.77 0.00 0.00 740501.26 740.43 2168132.53 00:07:03.746 [2024-11-17T04:12:49.473Z] =================================================================================================================== 00:07:03.746 [2024-11-17T04:12:49.473Z] Total : 1403.64 87.73 0.00 0.00 966559.18 740.43 2168132.53 00:07:05.130 00:07:05.130 real 0m8.038s 00:07:05.130 user 0m15.373s 00:07:05.130 sys 0m0.203s 00:07:05.130 04:12:50 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:05.130 ************************************ 00:07:05.130 END TEST bdev_verify_big_io 00:07:05.130 ************************************ 00:07:05.130 04:12:50 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:05.130 04:12:50 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:05.130 04:12:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:05.130 04:12:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.130 04:12:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.130 ************************************ 00:07:05.130 START TEST bdev_write_zeroes 00:07:05.130 ************************************ 00:07:05.130 04:12:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:05.391 [2024-11-17 04:12:50.856476] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:05.391 [2024-11-17 04:12:50.856593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72456 ] 00:07:05.391 [2024-11-17 04:12:51.017277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.391 [2024-11-17 04:12:51.038551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.962 Running I/O for 1 seconds... 
00:07:06.903 52992.00 IOPS, 207.00 MiB/s 00:07:06.903 Latency(us) 00:07:06.903 [2024-11-17T04:12:52.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.903 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.903 Nvme0n1 : 1.02 8837.60 34.52 0.00 0.00 14453.08 5570.56 24097.08 00:07:06.903 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.903 Nvme1n1 : 1.02 8827.40 34.48 0.00 0.00 14448.92 10687.41 22483.89 00:07:06.903 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.903 Nvme2n1 : 1.02 8817.12 34.44 0.00 0.00 14395.12 10334.52 22887.19 00:07:06.903 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.903 Nvme2n2 : 1.02 8807.08 34.40 0.00 0.00 14366.47 10032.05 22988.01 00:07:06.903 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.903 Nvme2n3 : 1.03 8797.12 34.36 0.00 0.00 14347.57 9275.86 22887.19 00:07:06.903 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.903 Nvme3n1 : 1.03 8787.19 34.32 0.00 0.00 14332.30 8418.86 22786.36 00:07:06.903 [2024-11-17T04:12:52.630Z] =================================================================================================================== 00:07:06.903 [2024-11-17T04:12:52.630Z] Total : 52873.52 206.54 0.00 0.00 14390.58 5570.56 24097.08 00:07:07.165 00:07:07.165 real 0m1.837s 00:07:07.165 user 0m1.542s 00:07:07.165 sys 0m0.179s 00:07:07.165 04:12:52 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.165 ************************************ 00:07:07.165 END TEST bdev_write_zeroes 00:07:07.165 04:12:52 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:07.165 ************************************ 00:07:07.165 04:12:52 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:07.165 04:12:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:07.165 04:12:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.165 04:12:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.165 ************************************ 00:07:07.165 START TEST bdev_json_nonenclosed 00:07:07.165 ************************************ 00:07:07.165 04:12:52 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:07.165 [2024-11-17 04:12:52.744268] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:07.165 [2024-11-17 04:12:52.744410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72497 ] 00:07:07.425 [2024-11-17 04:12:52.901461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.425 [2024-11-17 04:12:52.921983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.425 [2024-11-17 04:12:52.922073] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:07.425 [2024-11-17 04:12:52.922093] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:07.425 [2024-11-17 04:12:52.922107] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.425 00:07:07.425 real 0m0.307s 00:07:07.425 user 0m0.122s 00:07:07.425 sys 0m0.082s 00:07:07.425 04:12:52 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.425 ************************************ 00:07:07.425 END TEST bdev_json_nonenclosed 00:07:07.425 ************************************ 00:07:07.425 04:12:52 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:07.425 04:12:53 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:07.425 04:12:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:07.425 04:12:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.425 04:12:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.425 ************************************ 00:07:07.425 START TEST bdev_json_nonarray 00:07:07.425 ************************************ 00:07:07.425 04:12:53 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:07.425 [2024-11-17 04:12:53.112447] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:07.425 [2024-11-17 04:12:53.112577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72518 ] 00:07:07.686 [2024-11-17 04:12:53.269878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.686 [2024-11-17 04:12:53.291676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.686 [2024-11-17 04:12:53.291777] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:07.686 [2024-11-17 04:12:53.291793] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:07.686 [2024-11-17 04:12:53.291815] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.686 00:07:07.686 real 0m0.306s 00:07:07.686 user 0m0.122s 00:07:07.686 sys 0m0.081s 00:07:07.686 04:12:53 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.686 ************************************ 00:07:07.686 END TEST bdev_json_nonarray 00:07:07.686 ************************************ 00:07:07.686 04:12:53 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:07.686 04:12:53 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:07.686 00:07:07.686 real 0m31.743s 00:07:07.686 user 0m50.339s 00:07:07.686 sys 0m5.241s 00:07:07.686 04:12:53 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.686 04:12:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.686 ************************************ 00:07:07.686 END TEST blockdev_nvme 00:07:07.686 ************************************ 00:07:07.946 04:12:53 -- spdk/autotest.sh@209 -- # uname -s 00:07:07.946 04:12:53 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:07.946 04:12:53 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:07.946 04:12:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.946 04:12:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.946 04:12:53 -- common/autotest_common.sh@10 -- # set +x 00:07:07.946 ************************************ 00:07:07.946 START TEST blockdev_nvme_gpt 00:07:07.946 ************************************ 00:07:07.946 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:07.947 * Looking for test storage... 
00:07:07.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.947 04:12:53 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.947 --rc genhtml_branch_coverage=1 00:07:07.947 --rc genhtml_function_coverage=1 00:07:07.947 --rc genhtml_legend=1 00:07:07.947 --rc geninfo_all_blocks=1 00:07:07.947 --rc geninfo_unexecuted_blocks=1 00:07:07.947 00:07:07.947 ' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.947 --rc 
genhtml_branch_coverage=1 00:07:07.947 --rc genhtml_function_coverage=1 00:07:07.947 --rc genhtml_legend=1 00:07:07.947 --rc geninfo_all_blocks=1 00:07:07.947 --rc geninfo_unexecuted_blocks=1 00:07:07.947 00:07:07.947 ' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.947 --rc genhtml_branch_coverage=1 00:07:07.947 --rc genhtml_function_coverage=1 00:07:07.947 --rc genhtml_legend=1 00:07:07.947 --rc geninfo_all_blocks=1 00:07:07.947 --rc geninfo_unexecuted_blocks=1 00:07:07.947 00:07:07.947 ' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.947 --rc genhtml_branch_coverage=1 00:07:07.947 --rc genhtml_function_coverage=1 00:07:07.947 --rc genhtml_legend=1 00:07:07.947 --rc geninfo_all_blocks=1 00:07:07.947 --rc geninfo_unexecuted_blocks=1 00:07:07.947 00:07:07.947 ' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72591 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72591 
00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72591 ']' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.947 04:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:07.947 04:12:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.208 [2024-11-17 04:12:53.723138] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:08.208 [2024-11-17 04:12:53.723308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72591 ] 00:07:08.208 [2024-11-17 04:12:53.898113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.208 [2024-11-17 04:12:53.918667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.151 04:12:54 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.151 04:12:54 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:09.151 04:12:54 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:09.151 04:12:54 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:09.151 04:12:54 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:09.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:09.484 Waiting for block devices as requested 00:07:09.484 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:09.484 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:09.745 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:09.745 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:15.031 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.031 04:13:00 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:15.031 04:13:00 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:15.031 BYT; 00:07:15.031 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:15.031 BYT; 00:07:15.031 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:15.031 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:15.032 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:15.032 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:15.032 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:15.032 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:15.032 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:15.032 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:15.032 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:15.032 04:13:00 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:15.032 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:15.032 04:13:00 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:15.964 The operation has completed successfully. 00:07:15.964 04:13:01 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:16.897 The operation has completed successfully. 00:07:16.897 04:13:02 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:17.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:17.721 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.721 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.721 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.721 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.982 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:17.982 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.982 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:17.982 [] 00:07:17.982 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.982 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:17.982 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:17.982 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:17.982 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:17.982 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:17.982 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.982 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.240 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.240 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:18.240 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.240 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.240 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:18.241 04:13:03 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "cfec08d3-7c24-4b47-be6e-6a839e9c34d2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cfec08d3-7c24-4b47-be6e-6a839e9c34d2",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "29fd0b54-7fbb-4680-9316-e071f7d8d8e1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "29fd0b54-7fbb-4680-9316-e071f7d8d8e1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5d53d7c5-18f0-4910-80f5-49633549a867"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5d53d7c5-18f0-4910-80f5-49633549a867",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "91ab6951-b5ac-480e-9e3d-2f556e3c45be"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91ab6951-b5ac-480e-9e3d-2f556e3c45be",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e02f1d77-84f2-471d-894a-c5c0fdb2e2f6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e02f1d77-84f2-471d-894a-c5c0fdb2e2f6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:18.241 04:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72591 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72591 ']' 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72591 00:07:18.241 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:18.242 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.242 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72591 00:07:18.499 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.499 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.499 killing process with pid 72591 00:07:18.499 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72591' 00:07:18.499 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72591 00:07:18.499 04:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72591 00:07:18.499 04:13:04 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:18.499 04:13:04 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:18.758 04:13:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:18.758 04:13:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.758 04:13:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.758 ************************************ 00:07:18.758 START TEST bdev_hello_world 00:07:18.758 ************************************ 00:07:18.758 04:13:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:18.758 
[2024-11-17 04:13:04.292346] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:18.758 [2024-11-17 04:13:04.292472] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73208 ] 00:07:18.758 [2024-11-17 04:13:04.450401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.758 [2024-11-17 04:13:04.468393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.323 [2024-11-17 04:13:04.835299] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:19.323 [2024-11-17 04:13:04.835348] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:19.323 [2024-11-17 04:13:04.835364] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:19.323 [2024-11-17 04:13:04.837470] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:19.323 [2024-11-17 04:13:04.837854] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:19.323 [2024-11-17 04:13:04.837882] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:19.323 [2024-11-17 04:13:04.838092] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:19.323 00:07:19.324 [2024-11-17 04:13:04.838120] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:19.324 00:07:19.324 real 0m0.758s 00:07:19.324 user 0m0.508s 00:07:19.324 sys 0m0.146s 00:07:19.324 04:13:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.324 04:13:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:19.324 ************************************ 00:07:19.324 END TEST bdev_hello_world 00:07:19.324 ************************************ 00:07:19.324 04:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:19.324 04:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.324 04:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.324 04:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:19.324 ************************************ 00:07:19.324 START TEST bdev_bounds 00:07:19.324 ************************************ 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73236 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:19.324 Process bdevio pid: 73236 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73236' 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73236 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73236 ']' 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.324 04:13:05 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.324 04:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:19.581 [2024-11-17 04:13:05.095086] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:19.581 [2024-11-17 04:13:05.095206] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73236 ] 00:07:19.581 [2024-11-17 04:13:05.253339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.581 [2024-11-17 04:13:05.274910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.581 [2024-11-17 04:13:05.275325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.581 [2024-11-17 04:13:05.275439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.512 04:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.512 04:13:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:20.512 04:13:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:20.512 I/O targets: 00:07:20.512 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:20.512 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:20.512 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:20.512 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:20.512 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:20.512 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:20.512 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:20.512 00:07:20.512 00:07:20.512 CUnit - A unit testing framework for C - Version 2.1-3 00:07:20.512 http://cunit.sourceforge.net/ 00:07:20.512 00:07:20.512 00:07:20.512 Suite: bdevio tests on: Nvme3n1 00:07:20.512 Test: blockdev write read block ...passed 00:07:20.512 Test: blockdev write zeroes read block ...passed 00:07:20.512 Test: blockdev write zeroes read no split ...passed 00:07:20.512 Test: blockdev write zeroes read split ...passed 00:07:20.512 Test: blockdev write zeroes read split partial ...passed 00:07:20.512 Test: blockdev reset ...[2024-11-17 04:13:06.120508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:20.512 [2024-11-17 04:13:06.122188] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:20.512 passed 00:07:20.512 Test: blockdev write read 8 blocks ...passed 00:07:20.512 Test: blockdev write read size > 128k ...passed 00:07:20.512 Test: blockdev write read invalid size ...passed 00:07:20.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:20.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:20.512 Test: blockdev write read max offset ...passed 00:07:20.512 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:20.512 Test: blockdev writev readv 8 blocks ...passed 00:07:20.512 Test: blockdev writev readv 30 x 1block ...passed 00:07:20.512 Test: blockdev writev readv block ...passed 00:07:20.512 Test: blockdev writev readv size > 128k ...passed 00:07:20.512 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:20.512 Test: blockdev comparev and writev ...[2024-11-17 04:13:06.127086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce40e000 len:0x1000 00:07:20.512 [2024-11-17 04:13:06.127194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:20.512 passed 00:07:20.512 Test: blockdev nvme passthru rw ...passed 00:07:20.512 Test: blockdev nvme passthru vendor specific ...passed 00:07:20.512 Test: blockdev nvme admin passthru ...[2024-11-17 04:13:06.127616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:20.512 [2024-11-17 04:13:06.127681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:20.512 passed 00:07:20.512 Test: blockdev copy ...passed 00:07:20.512 Suite: bdevio tests on: Nvme2n3 00:07:20.512 Test: blockdev write read block ...passed 00:07:20.512 Test: blockdev write zeroes read block ...passed 00:07:20.512 Test: blockdev write zeroes read no split ...passed 00:07:20.512 Test: blockdev write zeroes read split ...passed 00:07:20.512 Test: blockdev write zeroes read split partial ...passed 00:07:20.512 Test: blockdev reset ...[2024-11-17 04:13:06.220122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:20.512 [2024-11-17 04:13:06.222389] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:20.512 passed 00:07:20.512 Test: blockdev write read 8 blocks ...passed 00:07:20.512 Test: blockdev write read size > 128k ...passed 00:07:20.512 Test: blockdev write read invalid size ...passed 00:07:20.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:20.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:20.512 Test: blockdev write read max offset ...passed 00:07:20.512 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:20.512 Test: blockdev writev readv 8 blocks ...passed 00:07:20.512 Test: blockdev writev readv 30 x 1block ...passed 00:07:20.512 Test: blockdev writev readv block ...passed 00:07:20.512 Test: blockdev writev readv size > 128k ...passed 00:07:20.512 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:20.512 Test: blockdev comparev and writev ...[2024-11-17 04:13:06.227131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce40a000 len:0x1000 00:07:20.512 [2024-11-17 04:13:06.227215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:20.512 passed 00:07:20.512 Test: blockdev nvme passthru rw ...passed 00:07:20.512 Test: blockdev nvme passthru vendor specific ...passed 00:07:20.512 Test: blockdev nvme admin passthru ...[2024-11-17 04:13:06.227716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:20.512 [2024-11-17 04:13:06.227782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:20.512 passed 00:07:20.512 Test: blockdev copy ...passed 00:07:20.512 Suite: bdevio tests on: Nvme2n2 00:07:20.512 Test: blockdev write read block ...passed 00:07:20.770 Test: blockdev write zeroes read block ...passed 00:07:20.770 Test: blockdev write zeroes read no split ...passed 00:07:20.770 Test: blockdev write zeroes read split ...passed 00:07:20.770 Test: blockdev write zeroes read split partial ...passed 00:07:20.770 Test: blockdev reset ...[2024-11-17 04:13:06.320318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:20.770 [2024-11-17 04:13:06.322368] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:20.770 passed 00:07:20.770 Test: blockdev write read 8 blocks ...passed 00:07:20.770 Test: blockdev write read size > 128k ...passed 00:07:20.770 Test: blockdev write read invalid size ...passed 00:07:20.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:20.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:20.770 Test: blockdev write read max offset ...passed 00:07:20.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:20.770 Test: blockdev writev readv 8 blocks ...passed 00:07:20.770 Test: blockdev writev readv 30 x 1block ...passed 00:07:20.770 Test: blockdev writev readv block ...passed 00:07:20.770 Test: blockdev writev readv size > 128k ...passed 00:07:20.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:20.770 Test: blockdev comparev and writev ...[2024-11-17 04:13:06.327238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1e05000 len:0x1000 00:07:20.770 [2024-11-17 04:13:06.327320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:20.770 passed 00:07:20.770 Test: blockdev nvme passthru rw ...passed 00:07:20.770 Test: blockdev nvme passthru vendor specific ...[2024-11-17 04:13:06.328003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:20.770 [2024-11-17 04:13:06.328062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:20.770 passed 00:07:20.770 Test: blockdev nvme admin passthru ...passed 00:07:20.770 Test: blockdev copy ...passed 00:07:20.770 Suite: bdevio tests on: Nvme2n1 00:07:20.770 Test: blockdev write read block ...passed 00:07:20.770 Test: blockdev write zeroes read block ...passed 00:07:20.770 Test: blockdev write zeroes read no split ...passed 00:07:20.770 Test: blockdev write zeroes read split ...passed 00:07:20.770 Test: blockdev write zeroes read split partial ...passed 00:07:20.770 Test: blockdev reset ...[2024-11-17 04:13:06.437974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:20.770 [2024-11-17 04:13:06.439751] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:20.770 passed 00:07:20.770 Test: blockdev write read 8 blocks ...passed 00:07:20.770 Test: blockdev write read size > 128k ...passed 00:07:20.770 Test: blockdev write read invalid size ...passed 00:07:20.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:20.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:20.770 Test: blockdev write read max offset ...passed 00:07:20.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:20.770 Test: blockdev writev readv 8 blocks ...passed 00:07:20.770 Test: blockdev writev readv 30 x 1block ...passed 00:07:20.770 Test: blockdev writev readv block ...passed 00:07:20.770 Test: blockdev writev readv size > 128k ...passed 00:07:20.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:20.770 Test: blockdev comparev and writev ...[2024-11-17 04:13:06.444392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e02000 len:0x1000 00:07:20.770 [2024-11-17 04:13:06.444481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:20.770 passed 00:07:20.770 Test: blockdev nvme passthru rw ...passed 00:07:20.770 Test: blockdev nvme passthru vendor specific ...[2024-11-17 04:13:06.444974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:20.770 passed 00:07:20.770 Test: blockdev nvme admin passthru ...[2024-11-17 04:13:06.445037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:20.770 passed 00:07:20.770 Test: blockdev copy ...passed 00:07:20.770 Suite: bdevio tests on: Nvme1n1p2 00:07:20.770 Test: blockdev write read block ...passed 00:07:21.028 Test: blockdev write zeroes read block ...passed 00:07:21.028 Test: blockdev write zeroes read no split ...passed 00:07:21.028 Test: blockdev write zeroes read split ...passed 00:07:21.028 Test: blockdev write zeroes read split partial ...passed 00:07:21.028 Test: blockdev reset ...passed 00:07:21.028 Test: blockdev write read 8 blocks ...passed 00:07:21.028 Test: blockdev write read size > 128k ...passed 00:07:21.028 Test: blockdev write read invalid size ...passed 00:07:21.028 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:21.028 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:21.028 Test: blockdev write read max offset ...passed 00:07:21.028 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:21.028 Test: blockdev writev readv 8 blocks ...passed 00:07:21.028 Test: blockdev writev readv 30 x 1block ...passed 00:07:21.028 Test: blockdev writev readv block ...passed 00:07:21.028 Test: blockdev writev readv size > 128k ...passed 00:07:21.028 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:21.028 Test: blockdev comparev and writev ...passed 00:07:21.028 Test: blockdev nvme passthru rw ...passed 00:07:21.028 Test: blockdev nvme passthru vendor specific ...passed 00:07:21.028 Test: blockdev nvme admin passthru ...passed 00:07:21.028 Test: blockdev copy ...passed 00:07:21.028 Suite: bdevio tests on: Nvme1n1p1 00:07:21.028 Test: blockdev write read block ...passed 00:07:21.028 Test: blockdev write zeroes read block ...[2024-11-17 04:13:06.554994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting 
controller 00:07:21.028 [2024-11-17 04:13:06.556487] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:21.028 [2024-11-17 04:13:06.561814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e2a3b000 len:0x1000 00:07:21.028 [2024-11-17 04:13:06.561899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:21.028 passed 00:07:21.028 Test: blockdev write zeroes read no split ...passed 00:07:21.028 Test: blockdev write zeroes read split ...passed 00:07:21.028 Test: blockdev write zeroes read split partial ...passed 00:07:21.028 Test: blockdev reset ...[2024-11-17 04:13:06.649047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:21.028 [2024-11-17 04:13:06.650600] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:21.028 passed 00:07:21.028 Test: blockdev write read 8 blocks ...passed 00:07:21.028 Test: blockdev write read size > 128k ...passed 00:07:21.028 Test: blockdev write read invalid size ...passed 00:07:21.028 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:21.028 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:21.028 Test: blockdev write read max offset ...passed 00:07:21.028 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:21.028 Test: blockdev writev readv 8 blocks ...passed 00:07:21.028 Test: blockdev writev readv 30 x 1block ...passed 00:07:21.028 Test: blockdev writev readv block ...passed 00:07:21.028 Test: blockdev writev readv size > 128k ...passed 00:07:21.028 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:21.028 Test: blockdev comparev and writev ...[2024-11-17 04:13:06.655000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e2a37000 len:0x1000 00:07:21.028 [2024-11-17 04:13:06.655085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:21.028 passed 00:07:21.028 Test: blockdev nvme passthru rw ...passed 00:07:21.028 Test: blockdev nvme passthru vendor specific ...passed 00:07:21.028 Test: blockdev nvme admin passthru ...passed 00:07:21.028 Test: blockdev copy ...passed 00:07:21.028 Suite: bdevio tests on: Nvme0n1 00:07:21.028 Test: blockdev write read block ...passed 00:07:21.028 Test: blockdev write zeroes read block ...passed 00:07:21.028 Test: blockdev write zeroes read no split ...passed 00:07:21.028 Test: blockdev write zeroes read split ...passed 00:07:21.286 Test: blockdev write zeroes read split partial ...passed 00:07:21.286 Test: blockdev reset ...[2024-11-17 04:13:06.757907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:21.286 [2024-11-17 04:13:06.759434] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:07:21.286 passed 00:07:21.286 Test: blockdev write read 8 blocks ...passed 00:07:21.286 Test: blockdev write read size > 128k ...passed 00:07:21.286 Test: blockdev write read invalid size ...passed 00:07:21.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:21.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:21.286 Test: blockdev write read max offset ...passed 00:07:21.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:21.286 Test: blockdev writev readv 8 blocks ...passed 00:07:21.286 Test: blockdev writev readv 30 x 1block ...passed 00:07:21.286 Test: blockdev writev readv block ...passed 00:07:21.286 Test: blockdev writev readv size > 128k ...passed 00:07:21.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:21.286 Test: blockdev comparev and writev ...[2024-11-17 04:13:06.763356] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:21.286 separate metadata which is not supported yet. 00:07:21.286 passed 00:07:21.286 Test: blockdev nvme passthru rw ...passed 00:07:21.286 Test: blockdev nvme passthru vendor specific ...[2024-11-17 04:13:06.764074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:21.286 passed 00:07:21.286 Test: blockdev nvme admin passthru ...[2024-11-17 04:13:06.764141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:21.286 passed 00:07:21.286 Test: blockdev copy ...passed 00:07:21.286 00:07:21.286 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.286 suites 7 7 n/a 0 0 00:07:21.286 tests 161 161 161 0 0 00:07:21.286 asserts 1025 1025 1025 0 n/a 00:07:21.286 00:07:21.286 Elapsed time = 1.543 seconds 00:07:21.286 0 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73236 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73236 ']' 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73236 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73236 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.286 killing process with pid 73236 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73236' 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73236 00:07:21.286 04:13:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73236 00:07:22.657 04:13:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:22.657 00:07:22.657 real 0m2.930s 00:07:22.657 user 0m7.301s 00:07:22.657 sys 0m0.298s 00:07:22.657 04:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.657 ************************************ 00:07:22.657 END TEST bdev_bounds 00:07:22.657 
************************************ 00:07:22.657 04:13:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:22.657 04:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:22.657 04:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:22.657 04:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.657 04:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.657 ************************************ 00:07:22.657 START TEST bdev_nbd 00:07:22.657 ************************************ 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:22.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73295 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73295 /var/tmp/spdk-nbd.sock 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73295 ']' 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:22.657 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:22.657 [2024-11-17 04:13:08.062105] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:22.657 [2024-11-17 04:13:08.062199] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.657 [2024-11-17 04:13:08.210602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.657 [2024-11-17 04:13:08.228543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:23.222 04:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.480 1+0 records in 00:07:23.480 1+0 records out 00:07:23.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514404 s, 8.0 MB/s 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:23.480 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.737 1+0 records in 00:07:23.737 1+0 records out 00:07:23.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362176 s, 11.3 MB/s 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:23.737 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.995 1+0 records in 00:07:23.995 1+0 records out 00:07:23.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302376 s, 13.5 MB/s 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:23.995 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.253 1+0 records in 00:07:24.253 1+0 records out 00:07:24.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286008 s, 14.3 MB/s 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:24.253 04:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.511 1+0 records in 00:07:24.511 1+0 records out 00:07:24.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460365 s, 8.9 MB/s 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:24.511 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:24.768 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:24.768 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:24.768 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:24.768 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:24.768 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:24.768 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.769 1+0 records in 00:07:24.769 1+0 records out 00:07:24.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405185 s, 10.1 MB/s 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:24.769 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.027 1+0 records in 00:07:25.027 1+0 records out 00:07:25.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347841 s, 11.8 MB/s 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:25.027 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd0", 00:07:25.285 "bdev_name": "Nvme0n1" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd1", 00:07:25.285 "bdev_name": "Nvme1n1p1" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd2", 00:07:25.285 "bdev_name": "Nvme1n1p2" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd3", 00:07:25.285 "bdev_name": "Nvme2n1" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd4", 00:07:25.285 "bdev_name": "Nvme2n2" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd5", 00:07:25.285 "bdev_name": "Nvme2n3" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd6", 00:07:25.285 "bdev_name": "Nvme3n1" 00:07:25.285 } 00:07:25.285 ]' 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd0", 00:07:25.285 "bdev_name": "Nvme0n1" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd1", 00:07:25.285 "bdev_name": "Nvme1n1p1" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd2", 00:07:25.285 "bdev_name": "Nvme1n1p2" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd3", 00:07:25.285 "bdev_name": "Nvme2n1" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd4", 00:07:25.285 "bdev_name": "Nvme2n2" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd5", 00:07:25.285 "bdev_name": "Nvme2n3" 00:07:25.285 }, 00:07:25.285 { 00:07:25.285 "nbd_device": "/dev/nbd6", 00:07:25.285 "bdev_name": "Nvme3n1" 00:07:25.285 } 00:07:25.285 ]' 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.285 04:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.543 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:25.801 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:25.801 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:25.801 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:25.801 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.801 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.801 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:25.801 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.801 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.801 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.801 04:13:11 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:26.059 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:26.059 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:26.059 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:26.059 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.059 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.059 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:26.059 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.059 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.059 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.059 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:26.317 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:26.317 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:26.317 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:26.317 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.317 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.317 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:26.317 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.317 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.317 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.317 04:13:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.576 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.834 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:26.835 
04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:26.835 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:27.096 /dev/nbd0 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.096 1+0 records in 00:07:27.096 1+0 records out 00:07:27.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574532 s, 7.1 MB/s 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:27.096 04:13:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:27.355 /dev/nbd1 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:27.355 04:13:13 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.355 1+0 records in 00:07:27.355 1+0 records out 00:07:27.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570746 s, 7.2 MB/s 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:27.355 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:27.613 /dev/nbd10 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.613 1+0 records in 00:07:27.613 1+0 records out 00:07:27.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367765 s, 11.1 MB/s 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:27.613 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:27.871 /dev/nbd11 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.871 1+0 records in 00:07:27.871 1+0 records out 00:07:27.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688963 s, 5.9 MB/s 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:27.871 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:27.872 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.872 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:27.872 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:28.131 /dev/nbd12 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.131 1+0 records in 00:07:28.131 1+0 records out 00:07:28.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442584 s, 9.3 MB/s 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.131 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:28.389 /dev/nbd13 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.389 1+0 records in 00:07:28.389 1+0 records out 00:07:28.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426119 s, 9.6 MB/s 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.389 04:13:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:28.389 /dev/nbd14 00:07:28.390 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:28.390 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:28.390 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:28.390 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:28.390 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.390 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.390 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.647 1+0 records in 00:07:28.647 1+0 records out 00:07:28.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422825 s, 9.7 MB/s 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.647 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:28.647 { 00:07:28.647 "nbd_device": "/dev/nbd0", 00:07:28.647 "bdev_name": "Nvme0n1" 00:07:28.647 }, 00:07:28.647 { 00:07:28.647 "nbd_device": "/dev/nbd1", 00:07:28.647 "bdev_name": "Nvme1n1p1" 00:07:28.647 }, 00:07:28.647 { 00:07:28.647 "nbd_device": "/dev/nbd10", 00:07:28.647 "bdev_name": "Nvme1n1p2" 00:07:28.647 }, 00:07:28.647 { 00:07:28.647 "nbd_device": "/dev/nbd11", 00:07:28.647 "bdev_name": "Nvme2n1" 00:07:28.647 }, 00:07:28.647 { 00:07:28.647 "nbd_device": "/dev/nbd12", 00:07:28.647 "bdev_name": "Nvme2n2" 00:07:28.647 }, 00:07:28.647 { 00:07:28.647 "nbd_device": "/dev/nbd13", 00:07:28.647 "bdev_name": "Nvme2n3" 
00:07:28.647 }, 00:07:28.647 { 00:07:28.647 "nbd_device": "/dev/nbd14", 00:07:28.647 "bdev_name": "Nvme3n1" 00:07:28.647 } 00:07:28.648 ]' 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:28.648 { 00:07:28.648 "nbd_device": "/dev/nbd0", 00:07:28.648 "bdev_name": "Nvme0n1" 00:07:28.648 }, 00:07:28.648 { 00:07:28.648 "nbd_device": "/dev/nbd1", 00:07:28.648 "bdev_name": "Nvme1n1p1" 00:07:28.648 }, 00:07:28.648 { 00:07:28.648 "nbd_device": "/dev/nbd10", 00:07:28.648 "bdev_name": "Nvme1n1p2" 00:07:28.648 }, 00:07:28.648 { 00:07:28.648 "nbd_device": "/dev/nbd11", 00:07:28.648 "bdev_name": "Nvme2n1" 00:07:28.648 }, 00:07:28.648 { 00:07:28.648 "nbd_device": "/dev/nbd12", 00:07:28.648 "bdev_name": "Nvme2n2" 00:07:28.648 }, 00:07:28.648 { 00:07:28.648 "nbd_device": "/dev/nbd13", 00:07:28.648 "bdev_name": "Nvme2n3" 00:07:28.648 }, 00:07:28.648 { 00:07:28.648 "nbd_device": "/dev/nbd14", 00:07:28.648 "bdev_name": "Nvme3n1" 00:07:28.648 } 00:07:28.648 ]' 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:28.648 /dev/nbd1 00:07:28.648 /dev/nbd10 00:07:28.648 /dev/nbd11 00:07:28.648 /dev/nbd12 00:07:28.648 /dev/nbd13 00:07:28.648 /dev/nbd14' 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:28.648 /dev/nbd1 00:07:28.648 /dev/nbd10 00:07:28.648 /dev/nbd11 00:07:28.648 /dev/nbd12 00:07:28.648 /dev/nbd13 00:07:28.648 /dev/nbd14' 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:28.648 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:28.905 256+0 records in 00:07:28.905 256+0 records out 00:07:28.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720225 s, 146 MB/s 00:07:28.905 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.905 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:28.905 256+0 records in 00:07:28.905 256+0 records out 00:07:28.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0680445 s, 15.4 MB/s 00:07:28.905 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.905 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:28.905 256+0 records in 00:07:28.905 256+0 records out 00:07:28.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.070896 s, 14.8 MB/s 00:07:28.905 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.905 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:28.905 256+0 records in 00:07:28.905 256+0 records out 00:07:28.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0688482 s, 15.2 MB/s 00:07:28.905 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.905 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:29.163 256+0 records in 00:07:29.163 256+0 records out 00:07:29.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0692511 s, 15.1 MB/s 00:07:29.163 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.164 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:29.164 256+0 records in 00:07:29.164 256+0 records out 00:07:29.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0686782 s, 15.3 MB/s 00:07:29.164 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.164 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:29.164 256+0 records in 00:07:29.164 256+0 records out 00:07:29.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0688764 s, 15.2 MB/s 00:07:29.164 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.164 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:29.164 256+0 records in 00:07:29.164 256+0 records out 00:07:29.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0688684 s, 15.2 MB/s 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.422 04:13:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:29.422 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:29.422 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:29.422 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:29.422 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.422 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.422 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:29.422 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:29.422 04:13:15 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:29.422 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.422 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:29.681 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:29.681 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:29.681 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:29.681 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.681 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.681 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:29.681 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:29.681 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.681 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.681 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:29.942 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:29.942 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:29.942 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:29.942 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.942 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.942 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:29.942 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:29.942 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.942 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.942 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:30.201 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:30.201 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:30.201 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:30.201 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.201 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.201 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:30.201 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:30.201 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.201 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.201 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:30.460 04:13:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:30.460 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:30.460 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:30.460 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.460 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.460 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:30.460 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:30.460 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.460 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.460 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.718 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:30.976 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:31.234 malloc_lvol_verify 00:07:31.234 04:13:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:31.493 49f59f09-c0fe-4a52-8df5-cc9490450f87 00:07:31.493 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:31.750 04bdaf4c-99d6-4595-b46e-a1997d02e436 00:07:31.751 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:32.008 /dev/nbd0 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:32.008 mke2fs 1.47.0 (5-Feb-2023) 00:07:32.008 Discarding device blocks: 0/4096 done 00:07:32.008 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:32.008 00:07:32.008 Allocating group tables: 0/1 done 00:07:32.008 Writing inode tables: 0/1 done 00:07:32.008 Creating journal (1024 blocks): done 00:07:32.008 Writing superblocks and filesystem accounting information: 0/1 done 00:07:32.008 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73295 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73295 ']' 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73295 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.008 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73295 00:07:32.266 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.266 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.266 killing process with pid 73295 00:07:32.266 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73295' 00:07:32.266 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73295 00:07:32.266 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73295 00:07:32.266 04:13:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:32.266 00:07:32.266 real 0m9.910s 00:07:32.266 user 0m14.509s 00:07:32.266 sys 0m3.423s 00:07:32.266 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.266 04:13:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:32.266 ************************************ 00:07:32.266 END TEST bdev_nbd 00:07:32.266 ************************************ 00:07:32.266 04:13:17 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:32.266 04:13:17 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:32.266 04:13:17 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:32.266 skipping fio tests on NVMe due to multi-ns failures. 00:07:32.266 04:13:17 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:32.266 04:13:17 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:32.266 04:13:17 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:32.266 04:13:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:32.266 04:13:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.266 04:13:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.266 ************************************ 00:07:32.266 START TEST bdev_verify 00:07:32.266 ************************************ 00:07:32.266 04:13:17 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:32.577 [2024-11-17 04:13:18.023468] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:32.577 [2024-11-17 04:13:18.023586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73691 ] 00:07:32.577 [2024-11-17 04:13:18.183016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.577 [2024-11-17 04:13:18.203559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.577 [2024-11-17 04:13:18.203597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.152 Running I/O for 5 seconds... 
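The bdev_verify run announced just above reduces to a single bdevperf invocation against the bdevs described in test/bdev/bdev.json. The command line is copied from this run; the flag glosses in the comments are the usual bdevperf meanings, added only as a reading aid:

# -q 128     queue depth per job
# -o 4096    I/O size in bytes
# -w verify  write a pattern, read it back, and compare
# -t 5       run time in seconds
# -m 0x3     core mask, matching the two reactors started on cores 0 and 1 above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

The latency table that follows reports one row per bdev for the Core Mask 0x1 job and one for the Core Mask 0x2 job, plus an aggregate total.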
00:07:35.111 19008.00 IOPS, 74.25 MiB/s [2024-11-17T04:13:22.217Z] 20128.00 IOPS, 78.62 MiB/s [2024-11-17T04:13:23.157Z] 21824.00 IOPS, 85.25 MiB/s [2024-11-17T04:13:23.723Z] 21680.00 IOPS, 84.69 MiB/s 00:07:37.996 Latency(us) 00:07:37.996 [2024-11-17T04:13:23.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.996 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x0 length 0xbd0bd 00:07:37.996 Nvme0n1 : 5.06 1441.80 5.63 0.00 0.00 88469.26 15627.82 83482.78 00:07:37.996 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:37.996 Nvme0n1 : 5.03 1500.54 5.86 0.00 0.00 85001.32 12754.31 84289.38 00:07:37.996 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x0 length 0x4ff80 00:07:37.996 Nvme1n1p1 : 5.06 1441.27 5.63 0.00 0.00 88303.74 18148.43 77030.01 00:07:37.996 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:37.996 Nvme1n1p1 : 5.06 1504.39 5.88 0.00 0.00 84526.35 7007.31 77030.01 00:07:37.996 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x0 length 0x4ff7f 00:07:37.996 Nvme1n1p2 : 5.07 1440.32 5.63 0.00 0.00 88160.48 19358.33 78643.20 00:07:37.996 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:37.996 Nvme1n1p2 : 5.08 1512.52 5.91 0.00 0.00 84088.51 10637.00 76223.41 00:07:37.996 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x0 length 0x80000 00:07:37.996 Nvme2n1 : 5.07 1439.89 5.62 0.00 0.00 87990.38 21374.82 81062.99 00:07:37.996 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x80000 length 0x80000 00:07:37.996 Nvme2n1 : 5.08 1512.02 5.91 0.00 0.00 83921.87 11191.53 76626.71 00:07:37.996 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x0 length 0x80000 00:07:37.996 Nvme2n2 : 5.08 1449.28 5.66 0.00 0.00 87393.61 3554.07 79449.80 00:07:37.996 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x80000 length 0x80000 00:07:37.996 Nvme2n2 : 5.08 1511.26 5.90 0.00 0.00 83741.38 12754.31 78239.90 00:07:37.996 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x0 length 0x80000 00:07:37.996 Nvme2n3 : 5.08 1448.57 5.66 0.00 0.00 87273.19 4763.96 77433.30 00:07:37.996 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x80000 length 0x80000 00:07:37.996 Nvme2n3 : 5.08 1510.58 5.90 0.00 0.00 83590.12 14014.62 79046.50 00:07:37.996 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x0 length 0x20000 00:07:37.996 Nvme3n1 : 5.09 1458.90 5.70 0.00 0.00 86555.32 4285.05 79853.10 00:07:37.996 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:37.996 Verification LBA range: start 0x20000 length 0x20000 00:07:37.996 Nvme3n1 : 5.09 1510.18 5.90 0.00 0.00 83456.49 9326.28 79853.10 
00:07:37.996 [2024-11-17T04:13:23.723Z] =================================================================================================================== 00:07:37.996 [2024-11-17T04:13:23.723Z] Total : 20681.50 80.79 0.00 0.00 85848.09 3554.07 84289.38 00:07:39.381 00:07:39.381 real 0m6.776s 00:07:39.381 user 0m12.768s 00:07:39.381 sys 0m0.196s 00:07:39.381 04:13:24 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.381 04:13:24 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:39.381 ************************************ 00:07:39.381 END TEST bdev_verify 00:07:39.381 ************************************ 00:07:39.382 04:13:24 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:39.382 04:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:39.382 04:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.382 04:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.382 ************************************ 00:07:39.382 START TEST bdev_verify_big_io 00:07:39.382 ************************************ 00:07:39.382 04:13:24 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:39.382 [2024-11-17 04:13:24.866929] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:39.382 [2024-11-17 04:13:24.867042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73789 ] 00:07:39.382 [2024-11-17 04:13:25.023943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.382 [2024-11-17 04:13:25.044131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.382 [2024-11-17 04:13:25.044230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.952 Running I/O for 5 seconds... 
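The bdev_verify_big_io run starting here is the same harness with the I/O size raised from 4 KiB to 64 KiB, which is why the summary below shows far fewer IOPS for a comparable amount of data moved:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''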
00:07:45.791 1036.00 IOPS, 64.75 MiB/s [2024-11-17T04:13:31.777Z] 2459.50 IOPS, 153.72 MiB/s [2024-11-17T04:13:31.777Z] 3103.00 IOPS, 193.94 MiB/s 00:07:46.050 Latency(us) 00:07:46.050 [2024-11-17T04:13:31.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.050 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x0 length 0xbd0b 00:07:46.050 Nvme0n1 : 5.68 97.43 6.09 0.00 0.00 1263212.64 21677.29 1897115.96 00:07:46.050 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:46.050 Nvme0n1 : 5.90 106.73 6.67 0.00 0.00 1125828.34 12653.49 1342177.28 00:07:46.050 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x0 length 0x4ff8 00:07:46.050 Nvme1n1p1 : 5.90 97.56 6.10 0.00 0.00 1211124.18 91952.05 1716438.25 00:07:46.050 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:46.050 Nvme1n1p1 : 5.90 100.44 6.28 0.00 0.00 1158183.07 85499.27 1832588.21 00:07:46.050 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x0 length 0x4ff7 00:07:46.050 Nvme1n1p2 : 6.11 70.71 4.42 0.00 0.00 1606489.01 120989.54 2193943.63 00:07:46.050 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:46.050 Nvme1n1p2 : 5.91 104.28 6.52 0.00 0.00 1093755.25 110503.78 1858399.31 00:07:46.050 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x0 length 0x8000 00:07:46.050 Nvme2n1 : 5.91 113.24 7.08 0.00 0.00 985845.31 122602.73 1187310.67 00:07:46.050 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x8000 length 0x8000 00:07:46.050 Nvme2n1 : 6.06 108.62 6.79 0.00 0.00 1014845.72 87919.06 1884210.41 00:07:46.050 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x0 length 0x8000 00:07:46.050 Nvme2n2 : 6.06 122.58 7.66 0.00 0.00 886717.07 52025.50 1213121.77 00:07:46.050 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x8000 length 0x8000 00:07:46.050 Nvme2n2 : 6.15 117.23 7.33 0.00 0.00 918854.71 51420.55 1910021.51 00:07:46.050 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x0 length 0x8000 00:07:46.050 Nvme2n3 : 6.11 130.58 8.16 0.00 0.00 807329.48 47387.57 1238932.87 00:07:46.050 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x8000 length 0x8000 00:07:46.050 Nvme2n3 : 6.15 121.25 7.58 0.00 0.00 861098.44 38918.30 1948738.17 00:07:46.050 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x0 length 0x2000 00:07:46.050 Nvme3n1 : 6.18 149.23 9.33 0.00 0.00 685754.89 1625.80 1277649.53 00:07:46.050 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.050 Verification LBA range: start 0x2000 length 0x2000 00:07:46.050 Nvme3n1 : 6.21 146.76 9.17 0.00 0.00 691855.27 696.32 1974549.27 00:07:46.050 
[2024-11-17T04:13:31.777Z] =================================================================================================================== 00:07:46.050 [2024-11-17T04:13:31.777Z] Total : 1586.64 99.17 0.00 0.00 978000.36 696.32 2193943.63 00:07:46.991 00:07:46.991 real 0m7.718s 00:07:46.991 user 0m14.706s 00:07:46.991 sys 0m0.220s 00:07:46.991 04:13:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.991 04:13:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:46.991 ************************************ 00:07:46.991 END TEST bdev_verify_big_io 00:07:46.991 ************************************ 00:07:46.991 04:13:32 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.991 04:13:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:46.991 04:13:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.991 04:13:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.991 ************************************ 00:07:46.991 START TEST bdev_write_zeroes 00:07:46.991 ************************************ 00:07:46.991 04:13:32 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.991 [2024-11-17 04:13:32.649147] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:46.991 [2024-11-17 04:13:32.649272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73887 ] 00:07:47.251 [2024-11-17 04:13:32.804042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.251 [2024-11-17 04:13:32.825147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.511 Running I/O for 1 seconds... 
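The bdev_write_zeroes run announced just above swaps the workload for write_zeroes, shortens the run to one second, and (per the "-c 0x1" in the EAL parameters) stays on a single core; the rest of the invocation is unchanged:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1 ''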
00:07:48.892 47828.00 IOPS, 186.83 MiB/s 00:07:48.892 Latency(us) 00:07:48.892 [2024-11-17T04:13:34.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.892 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.892 Nvme0n1 : 1.03 6570.29 25.67 0.00 0.00 19431.29 6856.07 158899.59 00:07:48.892 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.892 Nvme1n1p1 : 1.03 6959.17 27.18 0.00 0.00 18325.54 10838.65 133895.09 00:07:48.892 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.892 Nvme1n1p2 : 1.03 6826.53 26.67 0.00 0.00 18577.27 10737.82 141961.06 00:07:48.892 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.892 Nvme2n1 : 1.03 6818.86 26.64 0.00 0.00 18553.75 11645.24 140347.86 00:07:48.892 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.892 Nvme2n2 : 1.03 6811.23 26.61 0.00 0.00 18520.18 11695.66 140347.86 00:07:48.892 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.892 Nvme2n3 : 1.03 6803.56 26.58 0.00 0.00 18488.17 10586.58 140347.86 00:07:48.892 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.892 Nvme3n1 : 1.04 6795.96 26.55 0.00 0.00 18463.18 9275.86 134701.69 00:07:48.892 [2024-11-17T04:13:34.619Z] =================================================================================================================== 00:07:48.892 [2024-11-17T04:13:34.619Z] Total : 47585.61 185.88 0.00 0.00 18617.37 6856.07 158899.59 00:07:48.892 00:07:48.892 real 0m1.858s 00:07:48.892 user 0m1.572s 00:07:48.892 sys 0m0.175s 00:07:48.892 04:13:34 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.892 04:13:34 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:48.892 ************************************ 00:07:48.892 END TEST bdev_write_zeroes 00:07:48.892 ************************************ 00:07:48.892 04:13:34 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:48.892 04:13:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:48.892 04:13:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.892 04:13:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.892 ************************************ 00:07:48.892 START TEST bdev_json_nonenclosed 00:07:48.892 ************************************ 00:07:48.892 04:13:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:48.892 [2024-11-17 04:13:34.560964] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:48.892 [2024-11-17 04:13:34.561081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73929 ] 00:07:49.151 [2024-11-17 04:13:34.720205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.151 [2024-11-17 04:13:34.739728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.151 [2024-11-17 04:13:34.739805] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:49.151 [2024-11-17 04:13:34.739823] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:49.151 [2024-11-17 04:13:34.739834] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.151 00:07:49.151 real 0m0.304s 00:07:49.151 user 0m0.114s 00:07:49.151 sys 0m0.086s 00:07:49.151 ************************************ 00:07:49.151 04:13:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.151 04:13:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:49.151 END TEST bdev_json_nonenclosed 00:07:49.151 ************************************ 00:07:49.151 04:13:34 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:49.151 04:13:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:49.151 04:13:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.151 04:13:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.151 ************************************ 00:07:49.151 START TEST bdev_json_nonarray 00:07:49.151 ************************************ 00:07:49.151 04:13:34 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:49.411 [2024-11-17 04:13:34.907669] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:49.411 [2024-11-17 04:13:34.907782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73955 ] 00:07:49.411 [2024-11-17 04:13:35.061827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.411 [2024-11-17 04:13:35.081770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.411 [2024-11-17 04:13:35.081862] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
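The two json_config errors reported above are deliberate: bdev_json_nonenclosed feeds bdevperf a --json file whose contents are not wrapped in an outer object ("not enclosed in {}"), and bdev_json_nonarray feeds one whose "subsystems" key is not an array, and both runs are expected to abort. For contrast, a minimal well-formed configuration has roughly the shape sketched below; this is an illustration inferred from the two error strings, not the contents of the nonenclosed.json/nonarray.json fixtures or of test/bdev/bdev.json:

cat > /tmp/minimal_bdev_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}
EOF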
00:07:49.411 [2024-11-17 04:13:35.081878] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:49.411 [2024-11-17 04:13:35.081889] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.671 00:07:49.671 real 0m0.301s 00:07:49.671 user 0m0.109s 00:07:49.671 sys 0m0.088s 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:49.671 ************************************ 00:07:49.671 END TEST bdev_json_nonarray 00:07:49.671 ************************************ 00:07:49.671 04:13:35 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:49.671 04:13:35 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:49.671 04:13:35 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:49.671 04:13:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.671 04:13:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.671 04:13:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.671 ************************************ 00:07:49.671 START TEST bdev_gpt_uuid 00:07:49.671 ************************************ 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73975 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 73975 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 73975 ']' 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.671 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.672 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.672 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.672 04:13:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:49.672 [2024-11-17 04:13:35.278838] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:49.672 [2024-11-17 04:13:35.278988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73975 ] 00:07:49.930 [2024-11-17 04:13:35.435464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.930 [2024-11-17 04:13:35.456569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.527 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.527 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:50.527 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:50.527 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.527 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:50.789 Some configs were skipped because the RPC state that can call them passed over. 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:50.789 { 00:07:50.789 "name": "Nvme1n1p1", 00:07:50.789 "aliases": [ 00:07:50.789 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:50.789 ], 00:07:50.789 "product_name": "GPT Disk", 00:07:50.789 "block_size": 4096, 00:07:50.789 "num_blocks": 655104, 00:07:50.789 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:50.789 "assigned_rate_limits": { 00:07:50.789 "rw_ios_per_sec": 0, 00:07:50.789 "rw_mbytes_per_sec": 0, 00:07:50.789 "r_mbytes_per_sec": 0, 00:07:50.789 "w_mbytes_per_sec": 0 00:07:50.789 }, 00:07:50.789 "claimed": false, 00:07:50.789 "zoned": false, 00:07:50.789 "supported_io_types": { 00:07:50.789 "read": true, 00:07:50.789 "write": true, 00:07:50.789 "unmap": true, 00:07:50.789 "flush": true, 00:07:50.789 "reset": true, 00:07:50.789 "nvme_admin": false, 00:07:50.789 "nvme_io": false, 00:07:50.789 "nvme_io_md": false, 00:07:50.789 "write_zeroes": true, 00:07:50.789 "zcopy": false, 00:07:50.789 "get_zone_info": false, 00:07:50.789 "zone_management": false, 00:07:50.789 "zone_append": false, 00:07:50.789 "compare": true, 00:07:50.789 "compare_and_write": false, 00:07:50.789 "abort": true, 00:07:50.789 "seek_hole": false, 00:07:50.789 "seek_data": false, 00:07:50.789 "copy": true, 00:07:50.789 "nvme_iov_md": false 00:07:50.789 }, 00:07:50.789 "driver_specific": { 
00:07:50.789 "gpt": { 00:07:50.789 "base_bdev": "Nvme1n1", 00:07:50.789 "offset_blocks": 256, 00:07:50.789 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:50.789 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:50.789 "partition_name": "SPDK_TEST_first" 00:07:50.789 } 00:07:50.789 } 00:07:50.789 } 00:07:50.789 ]' 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:50.789 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:51.075 { 00:07:51.075 "name": "Nvme1n1p2", 00:07:51.075 "aliases": [ 00:07:51.075 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:51.075 ], 00:07:51.075 "product_name": "GPT Disk", 00:07:51.075 "block_size": 4096, 00:07:51.075 "num_blocks": 655103, 00:07:51.075 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:51.075 "assigned_rate_limits": { 00:07:51.075 "rw_ios_per_sec": 0, 00:07:51.075 "rw_mbytes_per_sec": 0, 00:07:51.075 "r_mbytes_per_sec": 0, 00:07:51.075 "w_mbytes_per_sec": 0 00:07:51.075 }, 00:07:51.075 "claimed": false, 00:07:51.075 "zoned": false, 00:07:51.075 "supported_io_types": { 00:07:51.075 "read": true, 00:07:51.075 "write": true, 00:07:51.075 "unmap": true, 00:07:51.075 "flush": true, 00:07:51.075 "reset": true, 00:07:51.075 "nvme_admin": false, 00:07:51.075 "nvme_io": false, 00:07:51.075 "nvme_io_md": false, 00:07:51.075 "write_zeroes": true, 00:07:51.075 "zcopy": false, 00:07:51.075 "get_zone_info": false, 00:07:51.075 "zone_management": false, 00:07:51.075 "zone_append": false, 00:07:51.075 "compare": true, 00:07:51.075 "compare_and_write": false, 00:07:51.075 "abort": true, 00:07:51.075 "seek_hole": false, 00:07:51.075 "seek_data": false, 00:07:51.075 "copy": true, 00:07:51.075 "nvme_iov_md": false 00:07:51.075 }, 00:07:51.075 "driver_specific": { 00:07:51.075 "gpt": { 00:07:51.075 "base_bdev": "Nvme1n1", 00:07:51.075 "offset_blocks": 655360, 00:07:51.075 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:51.075 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:51.075 "partition_name": "SPDK_TEST_second" 00:07:51.075 } 00:07:51.075 } 00:07:51.075 } 00:07:51.075 ]' 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 73975 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 73975 ']' 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 73975 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73975 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.075 killing process with pid 73975 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73975' 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 73975 00:07:51.075 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 73975 00:07:51.350 00:07:51.351 real 0m1.792s 00:07:51.351 user 0m1.898s 00:07:51.351 sys 0m0.417s 00:07:51.351 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.351 ************************************ 00:07:51.351 END TEST bdev_gpt_uuid 00:07:51.351 ************************************ 00:07:51.351 04:13:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:51.351 04:13:37 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:51.351 04:13:37 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:51.351 04:13:37 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:51.351 04:13:37 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:51.351 04:13:37 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:51.351 04:13:37 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:51.351 04:13:37 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:51.351 04:13:37 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:51.351 04:13:37 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:51.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:51.921 Waiting for block devices as requested 00:07:51.921 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:51.921 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:52.182 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:52.182 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:57.473 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:57.473 04:13:42 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:57.473 04:13:42 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:57.473 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:57.473 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:57.473 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:57.473 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:57.473 04:13:43 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:57.473 00:07:57.473 real 0m49.724s 00:07:57.473 user 1m5.204s 00:07:57.473 sys 0m7.634s 00:07:57.473 04:13:43 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.473 ************************************ 00:07:57.473 END TEST blockdev_nvme_gpt 00:07:57.473 ************************************ 00:07:57.473 04:13:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.733 04:13:43 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:57.733 04:13:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.733 04:13:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.733 04:13:43 -- common/autotest_common.sh@10 -- # set +x 00:07:57.733 ************************************ 00:07:57.733 START TEST nvme 00:07:57.733 ************************************ 00:07:57.733 04:13:43 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:57.733 * Looking for test storage... 00:07:57.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:57.733 04:13:43 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.733 04:13:43 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.733 04:13:43 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.733 04:13:43 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.733 04:13:43 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.733 04:13:43 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.733 04:13:43 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.733 04:13:43 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.733 04:13:43 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.733 04:13:43 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.733 04:13:43 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.733 04:13:43 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.733 04:13:43 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.733 04:13:43 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.733 04:13:43 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.733 04:13:43 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:57.733 04:13:43 nvme -- scripts/common.sh@345 -- # : 1 00:07:57.733 04:13:43 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.733 04:13:43 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.733 04:13:43 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:57.733 04:13:43 nvme -- scripts/common.sh@353 -- # local d=1 00:07:57.733 04:13:43 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.733 04:13:43 nvme -- scripts/common.sh@355 -- # echo 1 00:07:57.733 04:13:43 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.733 04:13:43 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:57.733 04:13:43 nvme -- scripts/common.sh@353 -- # local d=2 00:07:57.733 04:13:43 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.733 04:13:43 nvme -- scripts/common.sh@355 -- # echo 2 00:07:57.733 04:13:43 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.733 04:13:43 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.733 04:13:43 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.734 04:13:43 nvme -- scripts/common.sh@368 -- # return 0 00:07:57.734 04:13:43 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.734 04:13:43 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.734 --rc genhtml_branch_coverage=1 00:07:57.734 --rc genhtml_function_coverage=1 00:07:57.734 --rc genhtml_legend=1 00:07:57.734 --rc geninfo_all_blocks=1 00:07:57.734 --rc geninfo_unexecuted_blocks=1 00:07:57.734 00:07:57.734 ' 00:07:57.734 04:13:43 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.734 --rc genhtml_branch_coverage=1 00:07:57.734 --rc genhtml_function_coverage=1 00:07:57.734 --rc genhtml_legend=1 00:07:57.734 --rc geninfo_all_blocks=1 00:07:57.734 --rc geninfo_unexecuted_blocks=1 00:07:57.734 00:07:57.734 ' 00:07:57.734 04:13:43 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.734 --rc genhtml_branch_coverage=1 00:07:57.734 --rc genhtml_function_coverage=1 00:07:57.734 --rc genhtml_legend=1 00:07:57.734 --rc geninfo_all_blocks=1 00:07:57.734 --rc geninfo_unexecuted_blocks=1 00:07:57.734 00:07:57.734 ' 00:07:57.734 04:13:43 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.734 --rc genhtml_branch_coverage=1 00:07:57.734 --rc genhtml_function_coverage=1 00:07:57.734 --rc genhtml_legend=1 00:07:57.734 --rc geninfo_all_blocks=1 00:07:57.734 --rc geninfo_unexecuted_blocks=1 00:07:57.734 00:07:57.734 ' 00:07:57.734 04:13:43 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:58.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:58.877 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.877 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.877 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.877 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.877 04:13:44 nvme -- nvme/nvme.sh@79 -- # uname 00:07:58.877 04:13:44 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:58.877 04:13:44 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:58.877 04:13:44 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:58.877 04:13:44 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:58.877 04:13:44 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:58.877 04:13:44 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:58.877 Waiting for stub to ready for secondary processes... 00:07:58.877 04:13:44 nvme -- common/autotest_common.sh@1075 -- # stubpid=74601 00:07:58.877 04:13:44 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:58.877 04:13:44 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:58.877 04:13:44 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74601 ]] 00:07:58.877 04:13:44 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:58.877 04:13:44 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:59.138 [2024-11-17 04:13:44.615081] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:59.138 [2024-11-17 04:13:44.615447] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:00.080 [2024-11-17 04:13:45.476712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.080 [2024-11-17 04:13:45.490569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.080 [2024-11-17 04:13:45.490889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.080 [2024-11-17 04:13:45.490929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.080 [2024-11-17 04:13:45.502111] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:00.080 [2024-11-17 04:13:45.502146] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:00.080 [2024-11-17 04:13:45.515012] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:00.080 [2024-11-17 04:13:45.515161] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:00.080 [2024-11-17 04:13:45.516345] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:00.080 [2024-11-17 04:13:45.516534] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:00.080 [2024-11-17 04:13:45.516586] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:00.080 [2024-11-17 04:13:45.517726] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:00.080 [2024-11-17 04:13:45.517891] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:00.080 [2024-11-17 04:13:45.517946] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:00.080 [2024-11-17 04:13:45.519629] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:00.080 [2024-11-17 04:13:45.519794] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:00.080 [2024-11-17 04:13:45.519853] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:00.080 [2024-11-17 04:13:45.519926] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:00.080 [2024-11-17 04:13:45.520058] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:00.080 04:13:45 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:00.080 04:13:45 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:00.080 done. 00:08:00.080 04:13:45 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:00.080 04:13:45 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:00.080 04:13:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.080 04:13:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.080 ************************************ 00:08:00.081 START TEST nvme_reset 00:08:00.081 ************************************ 00:08:00.081 04:13:45 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:00.081 Initializing NVMe Controllers 00:08:00.081 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:00.081 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:00.081 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:00.081 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:00.081 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:00.081 00:08:00.342 ************************************ 00:08:00.342 END TEST nvme_reset 00:08:00.342 ************************************ 00:08:00.342 real 0m0.207s 00:08:00.342 user 0m0.069s 00:08:00.342 sys 0m0.090s 00:08:00.342 04:13:45 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.342 04:13:45 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:00.342 04:13:45 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:00.342 04:13:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.342 04:13:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.342 04:13:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.342 ************************************ 00:08:00.342 START TEST nvme_identify 00:08:00.342 ************************************ 00:08:00.342 04:13:45 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:00.342 04:13:45 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:00.342 04:13:45 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:00.342 04:13:45 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:00.342 04:13:45 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:00.342 04:13:45 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:00.342 04:13:45 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:00.342 04:13:45 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:00.342 04:13:45 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:00.342 04:13:45 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:00.342 04:13:45 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:00.342 04:13:45 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:00.342 04:13:45 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:00.606 
===================================================== 00:08:00.606 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.606 ===================================================== 00:08:00.606 Controller Capabilities/Features 00:08:00.606 ================================ 00:08:00.606 Vendor ID: 1b36 00:08:00.606 Subsystem Vendor ID: 1af4 00:08:00.606 Serial Number: 12343 00:08:00.606 Model Number: QEMU NVMe Ctrl 00:08:00.606 Firmware Version: 8.0.0 00:08:00.606 Recommended Arb Burst: 6 00:08:00.606 IEEE OUI Identifier: 00 54 52 00:08:00.606 Multi-path I/O 00:08:00.606 May have multiple subsystem ports: No 00:08:00.606 May have multiple controllers: Yes 00:08:00.606 Associated with SR-IOV VF: No 00:08:00.606 Max Data Transfer Size: 524288 00:08:00.606 Max Number of Namespaces: 256 00:08:00.606 Max Number of I/O Queues: 64 00:08:00.606 NVMe Specification Version (VS): 1.4 00:08:00.606 NVMe Specification Version (Identify): 1.4 00:08:00.606 Maximum Queue Entries: 2048 00:08:00.606 Contiguous Queues Required: Yes 00:08:00.606 Arbitration Mechanisms Supported 00:08:00.606 Weighted Round Robin: Not Supported 00:08:00.606 Vendor Specific: Not Supported 00:08:00.606 Reset Timeout: 7500 ms 00:08:00.606 Doorbell Stride: 4 bytes 00:08:00.606 NVM Subsystem Reset: Not Supported 00:08:00.606 Command Sets Supported 00:08:00.607 NVM Command Set: Supported 00:08:00.607 Boot Partition: Not Supported 00:08:00.607 Memory Page Size Minimum: 4096 bytes 00:08:00.607 Memory Page Size Maximum: 65536 bytes 00:08:00.607 Persistent Memory Region: Not Supported 00:08:00.607 Optional Asynchronous Events Supported 00:08:00.607 Namespace Attribute Notices: Supported 00:08:00.607 Firmware Activation Notices: Not Supported 00:08:00.607 ANA Change Notices: Not Supported 00:08:00.607 PLE Aggregate Log Change Notices: Not Supported 00:08:00.607 LBA Status Info Alert Notices: Not Supported 00:08:00.607 EGE Aggregate Log Change Notices: Not Supported 00:08:00.607 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.607 Zone Descriptor Change Notices: Not Supported 00:08:00.607 Discovery Log Change Notices: Not Supported 00:08:00.607 Controller Attributes 00:08:00.607 128-bit Host Identifier: Not Supported 00:08:00.607 Non-Operational Permissive Mode: Not Supported 00:08:00.607 NVM Sets: Not Supported 00:08:00.607 Read Recovery Levels: Not Supported 00:08:00.607 Endurance Groups: Supported 00:08:00.607 Predictable Latency Mode: Not Supported 00:08:00.607 Traffic Based Keep ALive: Not Supported 00:08:00.607 Namespace Granularity: Not Supported 00:08:00.607 SQ Associations: Not Supported 00:08:00.607 UUID List: Not Supported 00:08:00.607 Multi-Domain Subsystem: Not Supported 00:08:00.607 Fixed Capacity Management: Not Supported 00:08:00.607 Variable Capacity Management: Not Supported 00:08:00.607 Delete Endurance Group: Not Supported 00:08:00.607 Delete NVM Set: Not Supported 00:08:00.607 Extended LBA Formats Supported: Supported 00:08:00.607 Flexible Data Placement Supported: Supported 00:08:00.607 00:08:00.607 Controller Memory Buffer Support 00:08:00.607 ================================ 00:08:00.607 Supported: No 00:08:00.607 00:08:00.607 Persistent Memory Region Support 00:08:00.607 ================================ 00:08:00.607 Supported: No 00:08:00.607 00:08:00.607 Admin Command Set Attributes 00:08:00.607 ============================ 00:08:00.607 Security Send/Receive: Not Supported 00:08:00.607 Format NVM: Supported 00:08:00.607 Firmware Activate/Download: Not Supported 00:08:00.607 Namespace Management: Supported 
00:08:00.607 Device Self-Test: Not Supported 00:08:00.607 Directives: Supported 00:08:00.607 NVMe-MI: Not Supported 00:08:00.607 Virtualization Management: Not Supported 00:08:00.607 Doorbell Buffer Config: Supported 00:08:00.607 Get LBA Status Capability: Not Supported 00:08:00.607 Command & Feature Lockdown Capability: Not Supported 00:08:00.607 Abort Command Limit: 4 00:08:00.607 Async Event Request Limit: 4 00:08:00.607 Number of Firmware Slots: N/A 00:08:00.607 Firmware Slot 1 Read-Only: N/A 00:08:00.607 Firmware Activation Without Reset: N/A 00:08:00.607 Multiple Update Detection Support: N/A 00:08:00.607 Firmware Update Granularity: No Information Provided 00:08:00.607 Per-Namespace SMART Log: Yes 00:08:00.607 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.607 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:00.607 Command Effects Log Page: Supported 00:08:00.607 Get Log Page Extended Data: Supported 00:08:00.607 Telemetry Log Pages: Not Supported 00:08:00.607 Persistent Event Log Pages: Not Supported 00:08:00.607 Supported Log Pages Log Page: May Support 00:08:00.607 Commands Supported & Effects Log Page: Not Supported 00:08:00.607 Feature Identifiers & Effects Log Page:May Support 00:08:00.607 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.607 Data Area 4 for Telemetry Log: Not Supported 00:08:00.607 Error Log Page Entries Supported: 1 00:08:00.607 Keep Alive: Not Supported 00:08:00.607 00:08:00.607 NVM Command Set Attributes 00:08:00.607 ========================== 00:08:00.607 Submission Queue Entry Size 00:08:00.607 Max: 64 00:08:00.607 Min: 64 00:08:00.607 Completion Queue Entry Size 00:08:00.607 Max: 16 00:08:00.607 Min: 16 00:08:00.607 Number of Namespaces: 256 00:08:00.607 Compare Command: Supported 00:08:00.607 Write Uncorrectable Command: Not Supported 00:08:00.607 Dataset Management Command: Supported 00:08:00.607 Write Zeroes Command: Supported 00:08:00.607 Set Features Save Field: Supported 00:08:00.607 Reservations: Not Supported 00:08:00.607 Timestamp: Supported 00:08:00.607 Copy: Supported 00:08:00.607 Volatile Write Cache: Present 00:08:00.607 Atomic Write Unit (Normal): 1 00:08:00.607 Atomic Write Unit (PFail): 1 00:08:00.607 Atomic Compare & Write Unit: 1 00:08:00.607 Fused Compare & Write: Not Supported 00:08:00.607 Scatter-Gather List 00:08:00.607 SGL Command Set: Supported 00:08:00.607 SGL Keyed: Not Supported 00:08:00.607 SGL Bit Bucket Descriptor: Not Supported 00:08:00.607 SGL Metadata Pointer: Not Supported 00:08:00.607 Oversized SGL: Not Supported 00:08:00.607 SGL Metadata Address: Not Supported 00:08:00.607 SGL Offset: Not Supported 00:08:00.607 Transport SGL Data Block: Not Supported 00:08:00.607 Replay Protected Memory Block: Not Supported 00:08:00.607 00:08:00.607 Firmware Slot Information 00:08:00.607 ========================= 00:08:00.607 Active slot: 1 00:08:00.607 Slot 1 Firmware Revision: 1.0 00:08:00.607 00:08:00.607 00:08:00.607 Commands Supported and Effects 00:08:00.607 ============================== 00:08:00.607 Admin Commands 00:08:00.607 -------------- 00:08:00.607 Delete I/O Submission Queue (00h): Supported 00:08:00.607 Create I/O Submission Queue (01h): Supported 00:08:00.607 Get Log Page (02h): Supported 00:08:00.607 Delete I/O Completion Queue (04h): Supported 00:08:00.607 Create I/O Completion Queue (05h): Supported 00:08:00.607 Identify (06h): Supported 00:08:00.607 Abort (08h): Supported 00:08:00.607 Set Features (09h): Supported 00:08:00.607 Get Features (0Ah): Supported 00:08:00.607 Asynchronous Event 
Request (0Ch): Supported 00:08:00.607 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.607 Directive Send (19h): Supported 00:08:00.607 Directive Receive (1Ah): Supported 00:08:00.607 Virtualization Management (1Ch): Supported 00:08:00.607 Doorbell Buffer Config (7Ch): Supported 00:08:00.607 Format NVM (80h): Supported LBA-Change 00:08:00.607 I/O Commands 00:08:00.607 ------------ 00:08:00.607 Flush (00h): Supported LBA-Change 00:08:00.607 Write (01h): Supported LBA-Change 00:08:00.607 Read (02h): Supported 00:08:00.607 Compare (05h): Supported 00:08:00.607 Write Zeroes (08h): Supported LBA-Change 00:08:00.607 Dataset Management (09h): Supported LBA-Change 00:08:00.607 Unknown (0Ch): Supported 00:08:00.607 Unknown (12h): Supported 00:08:00.607 Copy (19h): Supported LBA-Change 00:08:00.607 Unknown (1Dh): Supported LBA-Change 00:08:00.607 00:08:00.607 Error Log 00:08:00.607 ========= 00:08:00.607 00:08:00.607 Arbitration 00:08:00.607 =========== 00:08:00.607 Arbitration Burst: no limit 00:08:00.607 00:08:00.607 Power Management 00:08:00.607 ================ 00:08:00.607 Number of Power States: 1 00:08:00.607 Current Power State: Power State #0 00:08:00.607 Power State #0: 00:08:00.607 Max Power: 25.00 W 00:08:00.607 Non-Operational State: Operational 00:08:00.607 Entry Latency: 16 microseconds 00:08:00.607 Exit Latency: 4 microseconds 00:08:00.607 Relative Read Throughput: 0 00:08:00.607 Relative Read Latency: 0 00:08:00.607 Relative Write Throughput: 0 00:08:00.607 Relative Write Latency: 0 00:08:00.607 Idle Power: Not Reported 00:08:00.607 Active Power: Not Reported 00:08:00.607 Non-Operational Permissive Mode: Not Supported 00:08:00.607 00:08:00.607 Health Information 00:08:00.607 ================== 00:08:00.607 Critical Warnings: 00:08:00.607 Available Spare Space: OK 00:08:00.607 Temperature: OK 00:08:00.607 Device Reliability: OK 00:08:00.607 Read Only: No 00:08:00.607 Volatile Memory Backup: OK 00:08:00.607 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.607 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.607 Available Spare: 0% 00:08:00.607 Available Spare Threshold: 0% 00:08:00.607 Life Percentage Used: 0% 00:08:00.607 Data Units Read: 778 00:08:00.607 Data Units Written: 707 00:08:00.607 Host Read Commands: 35615 00:08:00.607 Host Write Commands: 35038 00:08:00.607 Controller Busy Time: 0 minutes 00:08:00.607 Power Cycles: 0 00:08:00.607 Power On Hours: 0 hours 00:08:00.607 Unsafe Shutdowns: 0 00:08:00.607 Unrecoverable Media Errors: 0 00:08:00.607 Lifetime Error Log Entries: 0 00:08:00.607 Warning Temperature Time: 0 minutes 00:08:00.608 Critical Temperature Time: 0 minutes 00:08:00.608 00:08:00.608 Number of Queues 00:08:00.608 ================ 00:08:00.608 Number of I/O Submission Queues: 64 00:08:00.608 Number of I/O Completion Queues: 64 00:08:00.608 00:08:00.608 ZNS Specific Controller Data 00:08:00.608 ============================ 00:08:00.608 Zone Append Size Limit: 0 00:08:00.608 00:08:00.608 00:08:00.608 Active Namespaces 00:08:00.608 ================= 00:08:00.608 Namespace ID:1 00:08:00.608 Error Recovery Timeout: Unlimited 00:08:00.608 Command Set Identifier: NVM (00h) 00:08:00.608 Deallocate: Supported 00:08:00.608 Deallocated/Unwritten Error: Supported 00:08:00.608 Deallocated Read Value: All 0x00 00:08:00.608 Deallocate in Write Zeroes: Not Supported 00:08:00.608 Deallocated Guard Field: 0xFFFF 00:08:00.608 Flush: Supported 00:08:00.608 Reservation: Not Supported 00:08:00.608 Namespace Sharing Capabilities: Multiple Controllers 
00:08:00.608 Size (in LBAs): 262144 (1GiB) 00:08:00.608 Capacity (in LBAs): 262144 (1GiB) 00:08:00.608 Utilization (in LBAs): 262144 (1GiB) 00:08:00.608 Thin Provisioning: Not Supported 00:08:00.608 Per-NS Atomic Units: No 00:08:00.608 Maximum Single Source Range Length: 128 00:08:00.608 Maximum Copy Length: 128 00:08:00.608 Maximum Source Range Count: 128 00:08:00.608 NGUID/EUI64 Never Reused: No 00:08:00.608 Namespace Write Protected: No 00:08:00.608 Endurance group ID: 1 00:08:00.608 Number of LBA Formats: 8 00:08:00.608 Current LBA Format: LBA Format #04 00:08:00.608 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.608 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.608 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.608 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.608 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.608 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.608 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.608 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.608 00:08:00.608 Get Feature FDP: 00:08:00.608 ================ 00:08:00.608 Enabled: Yes 00:08:00.608 FDP configuration index: 0 00:08:00.608 00:08:00.608 FDP configurations log page 00:08:00.608 =========================== 00:08:00.608 Number of FDP configurations: 1 00:08:00.608 Version: 0 00:08:00.608 Size: 112 00:08:00.608 FDP Configuration Descriptor: 0 00:08:00.608 Descriptor Size: 96 00:08:00.608 Reclaim Group Identifier format: 2 00:08:00.608 FDP Volatile Write Cache: Not Present 00:08:00.608 FDP Configuration: Valid 00:08:00.608 Vendor Specific Size: 0 00:08:00.608 Number of Reclaim Groups: 2 00:08:00.608 Number of Recalim Unit Handles: 8 00:08:00.608 Max Placement Identifiers: 128 00:08:00.608 Number of Namespaces Suppprted: 256 00:08:00.608 Reclaim unit Nominal Size: 6000000 bytes 00:08:00.608 Estimated Reclaim Unit Time Limit: Not Reported 00:08:00.608 RUH Desc #000: RUH Type: Initially Isolated 00:08:00.608 RUH Desc #001: RUH Type: Initially Isolated 00:08:00.608 RUH Desc #002: RUH Type: Initially Isolated 00:08:00.608 RUH Desc #003: RUH Type: Initially Isolated 00:08:00.608 RUH Desc #004: RUH Type: Initially Isolated 00:08:00.608 RUH Desc #005: RUH Type: Initially Isolated 00:08:00.608 RUH Desc #006: RUH Type: Initially Isolated 00:08:00.608 RUH Desc #007: RUH Type: Initially Isolated 00:08:00.608 00:08:00.608 FDP reclaim unit handle usage log page 00:08:00.608 ====================================== 00:08:00.608 Number of Reclaim Unit Handles: 8 00:08:00.608 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:00.608 RUH Usage Desc #001: RUH Attributes: Unused 00:08:00.608 RUH Usage Desc #002: RUH Attributes: Unused 00:08:00.608 RUH Usage Desc #003: RUH Attributes: Unused 00:08:00.608 RUH Usage Desc #004: RUH Attributes: Unused 00:08:00.608 RUH Usage Desc #005: RUH Attributes: Unused 00:08:00.608 RUH Usage Desc #006: RUH Attributes: Unused 00:08:00.608 RUH Usage Desc #007: RUH Attributes: Unused 00:08:00.608 00:08:00.608 FDP statistics log page 00:08:00.608 ======================= 00:08:00.608 Host bytes with metadata written: 443260928 00:08:00.608 Media bytes with metadata written: 443326464 00:08:00.608 Media bytes erased: 0 00:08:00.608 00:08:00.608 FDP events log page 00:08:00.608 =================== 00:08:00.608 Number of FDP events: 0 00:08:00.608 00:08:00.608 NVM Specific Namespace Data 00:08:00.608 =========================== 00:08:00.608 Logical Block Storage Tag Mask: 0 00:08:00.608 Protection 
Information Capabilities: 00:08:00.608 16b Guard Protection Information Storage Tag Support: No 00:08:00.608 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.608 Storage Tag Check Read Support: No 00:08:00.608 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.608 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.608 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.608 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.608 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.608 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.608 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.608 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.608 ===================================================== 00:08:00.608 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.608 ===================================================== 00:08:00.608 Controller Capabilities/Features 00:08:00.608 ================================ 00:08:00.608 Vendor ID: 1b36 00:08:00.608 Subsystem Vendor ID: 1af4 00:08:00.608 Serial Number: 12340 00:08:00.608 Model Number: QEMU NVMe Ctrl 00:08:00.608 Firmware Version: 8.0.0 00:08:00.608 Recommended Arb Burst: 6 00:08:00.608 IEEE OUI Identifier: 00 54 52 00:08:00.608 Multi-path I/O 00:08:00.608 May have multiple subsystem ports: No 00:08:00.608 May have multiple controllers: No 00:08:00.608 Associated with SR-IOV VF: No 00:08:00.608 Max Data Transfer Size: 524288 00:08:00.608 Max Number of Namespaces: 256 00:08:00.608 Max Number of I/O Queues: 64 00:08:00.608 NVMe Specification Version (VS): 1.4 00:08:00.608 NVMe Specification Version (Identify): 1.4 00:08:00.608 Maximum Queue Entries: 2048 00:08:00.608 Contiguous Queues Required: Yes 00:08:00.608 Arbitration Mechanisms Supported 00:08:00.608 Weighted Round Robin: Not Supported 00:08:00.608 Vendor Specific: Not Supported 00:08:00.608 Reset Timeout: 7500 ms 00:08:00.608 Doorbell Stride: 4 bytes 00:08:00.608 NVM Subsystem Reset: Not Supported 00:08:00.608 Command Sets Supported 00:08:00.608 NVM Command Set: Supported 00:08:00.608 Boot Partition: Not Supported 00:08:00.608 Memory Page Size Minimum: 4096 bytes 00:08:00.608 Memory Page Size Maximum: 65536 bytes 00:08:00.608 Persistent Memory Region: Not Supported 00:08:00.608 Optional Asynchronous Events Supported 00:08:00.608 Namespace Attribute Notices: Supported 00:08:00.608 Firmware Activation Notices: Not Supported 00:08:00.608 ANA Change Notices: Not Supported 00:08:00.608 PLE Aggregate Log Change Notices: Not Supported 00:08:00.608 LBA Status Info Alert Notices: Not Supported 00:08:00.608 EGE Aggregate Log Change Notices: Not Supported 00:08:00.608 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.608 Zone Descriptor Change Notices: Not Supported 00:08:00.608 Discovery Log Change Notices: Not Supported 00:08:00.608 Controller Attributes 00:08:00.608 128-bit Host Identifier: Not Supported 00:08:00.608 Non-Operational Permissive Mode: Not Supported 00:08:00.608 NVM Sets: Not Supported 00:08:00.608 Read Recovery Levels: Not Supported 00:08:00.608 Endurance Groups: Not Supported 00:08:00.608 Predictable Latency Mode: Not Supported 00:08:00.608 Traffic 
Based Keep ALive: Not Supported 00:08:00.608 Namespace Granularity: Not Supported 00:08:00.608 SQ Associations: Not Supported 00:08:00.608 UUID List: Not Supported 00:08:00.608 Multi-Domain Subsystem: Not Supported 00:08:00.608 Fixed Capacity Management: Not Supported 00:08:00.608 Variable Capacity Management: Not Supported 00:08:00.608 Delete Endurance Group: Not Supported 00:08:00.608 Delete NVM Set: Not Supported 00:08:00.608 Extended LBA Formats Supported: Supported 00:08:00.608 Flexible Data Placement Supported: Not Supported 00:08:00.608 00:08:00.608 Controller Memory Buffer Support 00:08:00.608 ================================ 00:08:00.608 Supported: No 00:08:00.608 00:08:00.608 Persistent Memory Region Support 00:08:00.608 ================================ 00:08:00.608 Supported: No 00:08:00.608 00:08:00.608 Admin Command Set Attributes 00:08:00.608 ============================ 00:08:00.608 Security Send/Receive: Not Supported 00:08:00.609 Format NVM: Supported 00:08:00.609 Firmware Activate/Download: Not Supported 00:08:00.609 Namespace Management: Supported 00:08:00.609 Device Self-Test: Not Supported 00:08:00.609 Directives: Supported 00:08:00.609 NVMe-MI: Not Supported 00:08:00.609 Virtualization Management: Not Supported 00:08:00.609 Doorbell Buffer Config: Supported 00:08:00.609 Get LBA Status Capability: Not Supported 00:08:00.609 Command & Feature Lockdown Capability: Not Supported 00:08:00.609 Abort Command Limit: 4 00:08:00.609 Async Event Request Limit: 4 00:08:00.609 Number of Firmware Slots: N/A 00:08:00.609 Firmware Slot 1 Read-Only: N/A 00:08:00.609 Firmware Activation Without Reset: N/A 00:08:00.609 Multiple Update Detection Support: N/A 00:08:00.609 Firmware Update Granularity: No Information Provided 00:08:00.609 Per-Namespace SMART Log: Yes 00:08:00.609 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.609 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:00.609 Command Effects Log Page: Supported 00:08:00.609 Get Log Page Extended Data: Supported 00:08:00.609 Telemetry Log Pages: Not Supported 00:08:00.609 Persistent Event Log Pages: Not Supported 00:08:00.609 Supported Log Pages Log Page: May Support 00:08:00.609 Commands Supported & Effects Log Page: Not Supported 00:08:00.609 Feature Identifiers & Effects Log Page:May Support 00:08:00.609 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.609 Data Area 4 for Telemetry Log: Not Supported 00:08:00.609 Error Log Page Entries Supported: 1 00:08:00.609 Keep Alive: Not Supported 00:08:00.609 00:08:00.609 NVM Command Set Attributes 00:08:00.609 ========================== 00:08:00.609 Submission Queue Entry Size 00:08:00.609 Max: 64 00:08:00.609 Min: 64 00:08:00.609 Completion Queue Entry Size 00:08:00.609 Max: 16 00:08:00.609 Min: 16 00:08:00.609 Number of Namespaces: 256 00:08:00.609 Compare Command: Supported 00:08:00.609 Write Uncorrectable Command: Not Supported 00:08:00.609 Dataset Management Command: Supported 00:08:00.609 Write Zeroes Command: Supported 00:08:00.609 Set Features Save Field: Supported 00:08:00.609 Reservations: Not Supported 00:08:00.609 Timestamp: Supported 00:08:00.609 Copy: Supported 00:08:00.609 Volatile Write Cache: Present 00:08:00.609 Atomic Write Unit (Normal): 1 00:08:00.609 Atomic Write Unit (PFail): 1 00:08:00.609 Atomic Compare & Write Unit: 1 00:08:00.609 Fused Compare & Write: Not Supported 00:08:00.609 Scatter-Gather List 00:08:00.609 SGL Command Set: Supported 00:08:00.609 SGL Keyed: Not Supported 00:08:00.609 SGL Bit Bucket Descriptor: Not Supported 00:08:00.609 
SGL Metadata Pointer: Not Supported 00:08:00.609 Oversized SGL: Not Supported 00:08:00.609 SGL Metadata Address: Not Supported 00:08:00.609 SGL Offset: Not Supported 00:08:00.609 Transport SGL Data Block: Not Supported 00:08:00.609 Replay Protected Memory Block: Not Supported 00:08:00.609 00:08:00.609 Firmware Slot Information 00:08:00.609 ========================= 00:08:00.609 Active slot: 1 00:08:00.609 Slot 1 Firmware Revision: 1.0 00:08:00.609 00:08:00.609 00:08:00.609 Commands Supported and Effects 00:08:00.609 ============================== 00:08:00.609 Admin Commands 00:08:00.609 -------------- 00:08:00.609 Delete I/O Submission Queue (00h): Supported 00:08:00.609 Create I/O Submission Queue (01h): Supported 00:08:00.609 Get Log Page (02h): Supported 00:08:00.609 Delete I/O Completion Queue (04h): Supported 00:08:00.609 Create I/O Completion Queue (05h): Supported 00:08:00.609 Identify (06h): Supported 00:08:00.609 Abort (08h): Supported 00:08:00.609 Set Features (09h): Supported 00:08:00.609 Get Features (0Ah): Supported 00:08:00.609 Asynchronous Event Request (0Ch): Supported 00:08:00.609 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.609 Directive Send (19h): Supported 00:08:00.609 Directive Receive (1Ah): Supported 00:08:00.609 Virtualization Management (1Ch): Supported 00:08:00.609 Doorbell Buffer Config (7Ch): Supported 00:08:00.609 Format NVM (80h): Supported LBA-Change 00:08:00.609 I/O Commands 00:08:00.609 ------------ 00:08:00.609 Flush (00h): Supported LBA-Change 00:08:00.609 Write (01h): Supported LBA-Change 00:08:00.609 Read (02h): Supported 00:08:00.609 Compare (05h): Supported 00:08:00.609 Write Zeroes (08h): Supported LBA-Change 00:08:00.609 Dataset Management (09h): Supported LBA-Change 00:08:00.609 Unknown (0Ch): Supported 00:08:00.609 Unknown (12h): Supported 00:08:00.609 Copy (19h): Supported LBA-Change 00:08:00.609 Unknown (1Dh): Supported LBA-Change 00:08:00.609 00:08:00.609 Error Log 00:08:00.609 ========= 00:08:00.609 00:08:00.609 Arbitration 00:08:00.609 =========== 00:08:00.609 Arbitration Burst: no limit 00:08:00.609 00:08:00.609 Power Management 00:08:00.609 ================ 00:08:00.609 Number of Power States: 1 00:08:00.609 Current Power State: Power State #0 00:08:00.609 Power State #0: 00:08:00.609 Max Power: 25.00 W 00:08:00.609 Non-Operational State: Operational 00:08:00.609 Entry Latency: 16 microseconds 00:08:00.609 Exit Latency: 4 microseconds 00:08:00.609 Relative Read Throughput: 0 00:08:00.609 Relative Read Latency: 0 00:08:00.609 Relative Write Throughput: 0 00:08:00.609 Relative Write Latency: 0 00:08:00.609 Idle Power: Not Reported 00:08:00.609 Active Power: Not Reported 00:08:00.609 Non-Operational Permissive Mode: Not Supported 00:08:00.609 00:08:00.609 Health Information 00:08:00.609 ================== 00:08:00.609 Critical Warnings: 00:08:00.609 Available Spare Space: OK 00:08:00.609 Temperature: OK 00:08:00.609 Device Reliability: OK 00:08:00.609 Read Only: No 00:08:00.609 Volatile Memory Backup: OK 00:08:00.609 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.609 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.609 Available Spare: 0% 00:08:00.609 Available Spare Threshold: 0% 00:08:00.609 Life Percentage Used: 0% 00:08:00.609 Data Units Read: 636 00:08:00.609 Data Units Written: 565 00:08:00.609 Host Read Commands: 33911 00:08:00.609 Host Write Commands: 33697 00:08:00.609 Controller Busy Time: 0 minutes 00:08:00.609 Power Cycles: 0 00:08:00.609 Power On Hours: 0 hours 00:08:00.609 Unsafe Shutdowns: 0 
00:08:00.609 Unrecoverable Media Errors: 0 00:08:00.609 Lifetime Error Log Entries: 0 00:08:00.609 Warning Temperature Time: 0 minutes 00:08:00.609 Critical Temperature Time: 0 minutes 00:08:00.609 00:08:00.609 Number of Queues 00:08:00.609 ================ 00:08:00.609 Number of I/O Submission Queues: 64 00:08:00.609 Number of I/O Completion Queues: 64 00:08:00.609 00:08:00.609 ZNS Specific Controller Data 00:08:00.609 ============================ 00:08:00.609 Zone Append Size Limit: 0 00:08:00.609 00:08:00.609 00:08:00.609 Active Namespaces 00:08:00.609 ================= 00:08:00.609 Namespace ID:1 00:08:00.609 Error Recovery Timeout: Unlimited 00:08:00.609 Command Set Identifier: NVM (00h) 00:08:00.609 Deallocate: Supported 00:08:00.609 Deallocated/Unwritten Error: Supported 00:08:00.609 Deallocated Read Value: All 0x00 00:08:00.609 Deallocate in Write Zeroes: Not Supported 00:08:00.609 Deallocated Guard Field: 0xFFFF 00:08:00.609 Flush: Supported 00:08:00.609 Reservation: Not Supported 00:08:00.609 Metadata Transferred as: Separate Metadata Buffer 00:08:00.609 Namespace Sharing Capabilities: Private 00:08:00.609 Size (in LBAs): 1548666 (5GiB) 00:08:00.609 Capacity (in LBAs): 1548666 (5GiB) 00:08:00.609 Utilization (in LBAs): 1548666 (5GiB) 00:08:00.609 Thin Provisioning: Not Supported 00:08:00.609 Per-NS Atomic Units: No 00:08:00.609 Maximum Single Source Range Length: 128 00:08:00.609 Maximum Copy Length: 128 00:08:00.609 Maximum Source Range Count: 128 00:08:00.609 NGUID/EUI64 Never Reused: No 00:08:00.609 Namespace Write Protected: No 00:08:00.609 Number of LBA Formats: 8 00:08:00.609 Current LBA Format: [2024-11-17 04:13:46.090140] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74622 terminated unexpected 00:08:00.609 [2024-11-17 04:13:46.092678] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74622 terminated unexpected 00:08:00.609 [2024-11-17 04:13:46.093646] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74622 terminated unexpected 00:08:00.609 LBA Format #07 00:08:00.609 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.609 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.609 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.609 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.609 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.609 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.609 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.609 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.610 00:08:00.610 NVM Specific Namespace Data 00:08:00.610 =========================== 00:08:00.610 Logical Block Storage Tag Mask: 0 00:08:00.610 Protection Information Capabilities: 00:08:00.610 16b Guard Protection Information Storage Tag Support: No 00:08:00.610 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.610 Storage Tag Check Read Support: No 00:08:00.610 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.610 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.610 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.610 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.610 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.610 
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.610 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.610 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.610 ===================================================== 00:08:00.610 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.610 ===================================================== 00:08:00.610 Controller Capabilities/Features 00:08:00.610 ================================ 00:08:00.610 Vendor ID: 1b36 00:08:00.610 Subsystem Vendor ID: 1af4 00:08:00.610 Serial Number: 12341 00:08:00.610 Model Number: QEMU NVMe Ctrl 00:08:00.610 Firmware Version: 8.0.0 00:08:00.610 Recommended Arb Burst: 6 00:08:00.610 IEEE OUI Identifier: 00 54 52 00:08:00.610 Multi-path I/O 00:08:00.610 May have multiple subsystem ports: No 00:08:00.610 May have multiple controllers: No 00:08:00.610 Associated with SR-IOV VF: No 00:08:00.610 Max Data Transfer Size: 524288 00:08:00.610 Max Number of Namespaces: 256 00:08:00.610 Max Number of I/O Queues: 64 00:08:00.610 NVMe Specification Version (VS): 1.4 00:08:00.610 NVMe Specification Version (Identify): 1.4 00:08:00.610 Maximum Queue Entries: 2048 00:08:00.610 Contiguous Queues Required: Yes 00:08:00.610 Arbitration Mechanisms Supported 00:08:00.610 Weighted Round Robin: Not Supported 00:08:00.610 Vendor Specific: Not Supported 00:08:00.610 Reset Timeout: 7500 ms 00:08:00.610 Doorbell Stride: 4 bytes 00:08:00.610 NVM Subsystem Reset: Not Supported 00:08:00.610 Command Sets Supported 00:08:00.610 NVM Command Set: Supported 00:08:00.610 Boot Partition: Not Supported 00:08:00.610 Memory Page Size Minimum: 4096 bytes 00:08:00.610 Memory Page Size Maximum: 65536 bytes 00:08:00.610 Persistent Memory Region: Not Supported 00:08:00.610 Optional Asynchronous Events Supported 00:08:00.610 Namespace Attribute Notices: Supported 00:08:00.610 Firmware Activation Notices: Not Supported 00:08:00.610 ANA Change Notices: Not Supported 00:08:00.610 PLE Aggregate Log Change Notices: Not Supported 00:08:00.610 LBA Status Info Alert Notices: Not Supported 00:08:00.610 EGE Aggregate Log Change Notices: Not Supported 00:08:00.610 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.610 Zone Descriptor Change Notices: Not Supported 00:08:00.610 Discovery Log Change Notices: Not Supported 00:08:00.610 Controller Attributes 00:08:00.610 128-bit Host Identifier: Not Supported 00:08:00.610 Non-Operational Permissive Mode: Not Supported 00:08:00.610 NVM Sets: Not Supported 00:08:00.610 Read Recovery Levels: Not Supported 00:08:00.610 Endurance Groups: Not Supported 00:08:00.610 Predictable Latency Mode: Not Supported 00:08:00.610 Traffic Based Keep ALive: Not Supported 00:08:00.610 Namespace Granularity: Not Supported 00:08:00.610 SQ Associations: Not Supported 00:08:00.610 UUID List: Not Supported 00:08:00.610 Multi-Domain Subsystem: Not Supported 00:08:00.610 Fixed Capacity Management: Not Supported 00:08:00.610 Variable Capacity Management: Not Supported 00:08:00.610 Delete Endurance Group: Not Supported 00:08:00.610 Delete NVM Set: Not Supported 00:08:00.610 Extended LBA Formats Supported: Supported 00:08:00.610 Flexible Data Placement Supported: Not Supported 00:08:00.610 00:08:00.610 Controller Memory Buffer Support 00:08:00.610 ================================ 00:08:00.610 Supported: No 00:08:00.610 00:08:00.610 Persistent Memory Region Support 00:08:00.610 
================================ 00:08:00.610 Supported: No 00:08:00.610 00:08:00.610 Admin Command Set Attributes 00:08:00.610 ============================ 00:08:00.610 Security Send/Receive: Not Supported 00:08:00.610 Format NVM: Supported 00:08:00.610 Firmware Activate/Download: Not Supported 00:08:00.610 Namespace Management: Supported 00:08:00.610 Device Self-Test: Not Supported 00:08:00.610 Directives: Supported 00:08:00.610 NVMe-MI: Not Supported 00:08:00.610 Virtualization Management: Not Supported 00:08:00.610 Doorbell Buffer Config: Supported 00:08:00.610 Get LBA Status Capability: Not Supported 00:08:00.610 Command & Feature Lockdown Capability: Not Supported 00:08:00.610 Abort Command Limit: 4 00:08:00.610 Async Event Request Limit: 4 00:08:00.610 Number of Firmware Slots: N/A 00:08:00.610 Firmware Slot 1 Read-Only: N/A 00:08:00.610 Firmware Activation Without Reset: N/A 00:08:00.610 Multiple Update Detection Support: N/A 00:08:00.610 Firmware Update Granularity: No Information Provided 00:08:00.610 Per-Namespace SMART Log: Yes 00:08:00.610 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.610 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:00.610 Command Effects Log Page: Supported 00:08:00.610 Get Log Page Extended Data: Supported 00:08:00.610 Telemetry Log Pages: Not Supported 00:08:00.610 Persistent Event Log Pages: Not Supported 00:08:00.610 Supported Log Pages Log Page: May Support 00:08:00.610 Commands Supported & Effects Log Page: Not Supported 00:08:00.610 Feature Identifiers & Effects Log Page:May Support 00:08:00.610 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.610 Data Area 4 for Telemetry Log: Not Supported 00:08:00.610 Error Log Page Entries Supported: 1 00:08:00.610 Keep Alive: Not Supported 00:08:00.610 00:08:00.610 NVM Command Set Attributes 00:08:00.610 ========================== 00:08:00.610 Submission Queue Entry Size 00:08:00.610 Max: 64 00:08:00.610 Min: 64 00:08:00.610 Completion Queue Entry Size 00:08:00.610 Max: 16 00:08:00.610 Min: 16 00:08:00.610 Number of Namespaces: 256 00:08:00.610 Compare Command: Supported 00:08:00.610 Write Uncorrectable Command: Not Supported 00:08:00.610 Dataset Management Command: Supported 00:08:00.610 Write Zeroes Command: Supported 00:08:00.610 Set Features Save Field: Supported 00:08:00.610 Reservations: Not Supported 00:08:00.610 Timestamp: Supported 00:08:00.610 Copy: Supported 00:08:00.610 Volatile Write Cache: Present 00:08:00.610 Atomic Write Unit (Normal): 1 00:08:00.610 Atomic Write Unit (PFail): 1 00:08:00.610 Atomic Compare & Write Unit: 1 00:08:00.610 Fused Compare & Write: Not Supported 00:08:00.610 Scatter-Gather List 00:08:00.610 SGL Command Set: Supported 00:08:00.610 SGL Keyed: Not Supported 00:08:00.610 SGL Bit Bucket Descriptor: Not Supported 00:08:00.610 SGL Metadata Pointer: Not Supported 00:08:00.610 Oversized SGL: Not Supported 00:08:00.610 SGL Metadata Address: Not Supported 00:08:00.610 SGL Offset: Not Supported 00:08:00.610 Transport SGL Data Block: Not Supported 00:08:00.610 Replay Protected Memory Block: Not Supported 00:08:00.610 00:08:00.610 Firmware Slot Information 00:08:00.610 ========================= 00:08:00.610 Active slot: 1 00:08:00.610 Slot 1 Firmware Revision: 1.0 00:08:00.610 00:08:00.610 00:08:00.610 Commands Supported and Effects 00:08:00.610 ============================== 00:08:00.610 Admin Commands 00:08:00.610 -------------- 00:08:00.610 Delete I/O Submission Queue (00h): Supported 00:08:00.610 Create I/O Submission Queue (01h): Supported 00:08:00.610 
Get Log Page (02h): Supported 00:08:00.610 Delete I/O Completion Queue (04h): Supported 00:08:00.610 Create I/O Completion Queue (05h): Supported 00:08:00.610 Identify (06h): Supported 00:08:00.610 Abort (08h): Supported 00:08:00.610 Set Features (09h): Supported 00:08:00.610 Get Features (0Ah): Supported 00:08:00.610 Asynchronous Event Request (0Ch): Supported 00:08:00.610 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.610 Directive Send (19h): Supported 00:08:00.610 Directive Receive (1Ah): Supported 00:08:00.610 Virtualization Management (1Ch): Supported 00:08:00.610 Doorbell Buffer Config (7Ch): Supported 00:08:00.610 Format NVM (80h): Supported LBA-Change 00:08:00.610 I/O Commands 00:08:00.610 ------------ 00:08:00.610 Flush (00h): Supported LBA-Change 00:08:00.611 Write (01h): Supported LBA-Change 00:08:00.611 Read (02h): Supported 00:08:00.611 Compare (05h): Supported 00:08:00.611 Write Zeroes (08h): Supported LBA-Change 00:08:00.611 Dataset Management (09h): Supported LBA-Change 00:08:00.611 Unknown (0Ch): Supported 00:08:00.611 Unknown (12h): Supported 00:08:00.611 Copy (19h): Supported LBA-Change 00:08:00.611 Unknown (1Dh): Supported LBA-Change 00:08:00.611 00:08:00.611 Error Log 00:08:00.611 ========= 00:08:00.611 00:08:00.611 Arbitration 00:08:00.611 =========== 00:08:00.611 Arbitration Burst: no limit 00:08:00.611 00:08:00.611 Power Management 00:08:00.611 ================ 00:08:00.611 Number of Power States: 1 00:08:00.611 Current Power State: Power State #0 00:08:00.611 Power State #0: 00:08:00.611 Max Power: 25.00 W 00:08:00.611 Non-Operational State: Operational 00:08:00.611 Entry Latency: 16 microseconds 00:08:00.611 Exit Latency: 4 microseconds 00:08:00.611 Relative Read Throughput: 0 00:08:00.611 Relative Read Latency: 0 00:08:00.611 Relative Write Throughput: 0 00:08:00.611 Relative Write Latency: 0 00:08:00.611 Idle Power: Not Reported 00:08:00.611 Active Power: Not Reported 00:08:00.611 Non-Operational Permissive Mode: Not Supported 00:08:00.611 00:08:00.611 Health Information 00:08:00.611 ================== 00:08:00.611 Critical Warnings: 00:08:00.611 Available Spare Space: OK 00:08:00.611 Temperature: OK 00:08:00.611 Device Reliability: OK 00:08:00.611 Read Only: No 00:08:00.611 Volatile Memory Backup: OK 00:08:00.611 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.611 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.611 Available Spare: 0% 00:08:00.611 Available Spare Threshold: 0% 00:08:00.611 Life Percentage Used: 0% 00:08:00.611 Data Units Read: 962 00:08:00.611 Data Units Written: 822 00:08:00.611 Host Read Commands: 51175 00:08:00.611 Host Write Commands: 49871 00:08:00.611 Controller Busy Time: 0 minutes 00:08:00.611 Power Cycles: 0 00:08:00.611 Power On Hours: 0 hours 00:08:00.611 Unsafe Shutdowns: 0 00:08:00.611 Unrecoverable Media Errors: 0 00:08:00.611 Lifetime Error Log Entries: 0 00:08:00.611 Warning Temperature Time: 0 minutes 00:08:00.611 Critical Temperature Time: 0 minutes 00:08:00.611 00:08:00.611 Number of Queues 00:08:00.611 ================ 00:08:00.611 Number of I/O Submission Queues: 64 00:08:00.611 Number of I/O Completion Queues: 64 00:08:00.611 00:08:00.611 ZNS Specific Controller Data 00:08:00.611 ============================ 00:08:00.611 Zone Append Size Limit: 0 00:08:00.611 00:08:00.611 00:08:00.611 Active Namespaces 00:08:00.611 ================= 00:08:00.611 Namespace ID:1 00:08:00.611 Error Recovery Timeout: Unlimited 00:08:00.611 Command Set Identifier: NVM (00h) 00:08:00.611 Deallocate: Supported 
00:08:00.611 Deallocated/Unwritten Error: Supported 00:08:00.611 Deallocated Read Value: All 0x00 00:08:00.611 Deallocate in Write Zeroes: Not Supported 00:08:00.611 Deallocated Guard Field: 0xFFFF 00:08:00.611 Flush: Supported 00:08:00.611 Reservation: Not Supported 00:08:00.611 Namespace Sharing Capabilities: Private 00:08:00.611 Size (in LBAs): 1310720 (5GiB) 00:08:00.611 Capacity (in LBAs): 1310720 (5GiB) 00:08:00.611 Utilization (in LBAs): 1310720 (5GiB) 00:08:00.611 Thin Provisioning: Not Supported 00:08:00.611 Per-NS Atomic Units: No 00:08:00.611 Maximum Single Source Range Length: 128 00:08:00.611 Maximum Copy Length: 128 00:08:00.611 Maximum Source Range Count: 128 00:08:00.611 NGUID/EUI64 Never Reused: No 00:08:00.611 Namespace Write Protected: No 00:08:00.611 Number of LBA Formats: 8 00:08:00.611 Current LBA Format: LBA Format #04 00:08:00.611 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.611 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.611 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.611 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.611 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.611 [2024-11-17 04:13:46.094925] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74622 terminated unexpected 00:08:00.611 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.611 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.611 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.611 00:08:00.611 NVM Specific Namespace Data 00:08:00.611 =========================== 00:08:00.611 Logical Block Storage Tag Mask: 0 00:08:00.611 Protection Information Capabilities: 00:08:00.611 16b Guard Protection Information Storage Tag Support: No 00:08:00.611 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.611 Storage Tag Check Read Support: No 00:08:00.611 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.611 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.611 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.611 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.611 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.611 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.611 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.611 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.611 ===================================================== 00:08:00.611 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.611 ===================================================== 00:08:00.611 Controller Capabilities/Features 00:08:00.611 ================================ 00:08:00.611 Vendor ID: 1b36 00:08:00.611 Subsystem Vendor ID: 1af4 00:08:00.611 Serial Number: 12342 00:08:00.611 Model Number: QEMU NVMe Ctrl 00:08:00.611 Firmware Version: 8.0.0 00:08:00.611 Recommended Arb Burst: 6 00:08:00.611 IEEE OUI Identifier: 00 54 52 00:08:00.611 Multi-path I/O 00:08:00.611 May have multiple subsystem ports: No 00:08:00.611 May have multiple controllers: No 00:08:00.611 Associated with SR-IOV VF: No 00:08:00.611 Max Data Transfer Size: 524288 00:08:00.611 Max Number of Namespaces: 256 00:08:00.611 Max 
Number of I/O Queues: 64 00:08:00.611 NVMe Specification Version (VS): 1.4 00:08:00.611 NVMe Specification Version (Identify): 1.4 00:08:00.611 Maximum Queue Entries: 2048 00:08:00.611 Contiguous Queues Required: Yes 00:08:00.611 Arbitration Mechanisms Supported 00:08:00.611 Weighted Round Robin: Not Supported 00:08:00.611 Vendor Specific: Not Supported 00:08:00.611 Reset Timeout: 7500 ms 00:08:00.611 Doorbell Stride: 4 bytes 00:08:00.611 NVM Subsystem Reset: Not Supported 00:08:00.611 Command Sets Supported 00:08:00.611 NVM Command Set: Supported 00:08:00.611 Boot Partition: Not Supported 00:08:00.611 Memory Page Size Minimum: 4096 bytes 00:08:00.611 Memory Page Size Maximum: 65536 bytes 00:08:00.611 Persistent Memory Region: Not Supported 00:08:00.611 Optional Asynchronous Events Supported 00:08:00.611 Namespace Attribute Notices: Supported 00:08:00.611 Firmware Activation Notices: Not Supported 00:08:00.611 ANA Change Notices: Not Supported 00:08:00.611 PLE Aggregate Log Change Notices: Not Supported 00:08:00.612 LBA Status Info Alert Notices: Not Supported 00:08:00.612 EGE Aggregate Log Change Notices: Not Supported 00:08:00.612 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.612 Zone Descriptor Change Notices: Not Supported 00:08:00.612 Discovery Log Change Notices: Not Supported 00:08:00.612 Controller Attributes 00:08:00.612 128-bit Host Identifier: Not Supported 00:08:00.612 Non-Operational Permissive Mode: Not Supported 00:08:00.612 NVM Sets: Not Supported 00:08:00.612 Read Recovery Levels: Not Supported 00:08:00.612 Endurance Groups: Not Supported 00:08:00.612 Predictable Latency Mode: Not Supported 00:08:00.612 Traffic Based Keep ALive: Not Supported 00:08:00.612 Namespace Granularity: Not Supported 00:08:00.612 SQ Associations: Not Supported 00:08:00.612 UUID List: Not Supported 00:08:00.612 Multi-Domain Subsystem: Not Supported 00:08:00.612 Fixed Capacity Management: Not Supported 00:08:00.612 Variable Capacity Management: Not Supported 00:08:00.612 Delete Endurance Group: Not Supported 00:08:00.612 Delete NVM Set: Not Supported 00:08:00.612 Extended LBA Formats Supported: Supported 00:08:00.612 Flexible Data Placement Supported: Not Supported 00:08:00.612 00:08:00.612 Controller Memory Buffer Support 00:08:00.612 ================================ 00:08:00.612 Supported: No 00:08:00.612 00:08:00.612 Persistent Memory Region Support 00:08:00.612 ================================ 00:08:00.612 Supported: No 00:08:00.612 00:08:00.612 Admin Command Set Attributes 00:08:00.612 ============================ 00:08:00.612 Security Send/Receive: Not Supported 00:08:00.612 Format NVM: Supported 00:08:00.612 Firmware Activate/Download: Not Supported 00:08:00.612 Namespace Management: Supported 00:08:00.612 Device Self-Test: Not Supported 00:08:00.612 Directives: Supported 00:08:00.612 NVMe-MI: Not Supported 00:08:00.612 Virtualization Management: Not Supported 00:08:00.612 Doorbell Buffer Config: Supported 00:08:00.612 Get LBA Status Capability: Not Supported 00:08:00.612 Command & Feature Lockdown Capability: Not Supported 00:08:00.612 Abort Command Limit: 4 00:08:00.612 Async Event Request Limit: 4 00:08:00.612 Number of Firmware Slots: N/A 00:08:00.612 Firmware Slot 1 Read-Only: N/A 00:08:00.612 Firmware Activation Without Reset: N/A 00:08:00.612 Multiple Update Detection Support: N/A 00:08:00.612 Firmware Update Granularity: No Information Provided 00:08:00.612 Per-Namespace SMART Log: Yes 00:08:00.612 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.612 Subsystem 
NQN: nqn.2019-08.org.qemu:12342 00:08:00.612 Command Effects Log Page: Supported 00:08:00.612 Get Log Page Extended Data: Supported 00:08:00.612 Telemetry Log Pages: Not Supported 00:08:00.612 Persistent Event Log Pages: Not Supported 00:08:00.612 Supported Log Pages Log Page: May Support 00:08:00.612 Commands Supported & Effects Log Page: Not Supported 00:08:00.612 Feature Identifiers & Effects Log Page:May Support 00:08:00.612 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.612 Data Area 4 for Telemetry Log: Not Supported 00:08:00.612 Error Log Page Entries Supported: 1 00:08:00.612 Keep Alive: Not Supported 00:08:00.612 00:08:00.612 NVM Command Set Attributes 00:08:00.612 ========================== 00:08:00.612 Submission Queue Entry Size 00:08:00.612 Max: 64 00:08:00.612 Min: 64 00:08:00.612 Completion Queue Entry Size 00:08:00.612 Max: 16 00:08:00.612 Min: 16 00:08:00.612 Number of Namespaces: 256 00:08:00.612 Compare Command: Supported 00:08:00.612 Write Uncorrectable Command: Not Supported 00:08:00.612 Dataset Management Command: Supported 00:08:00.612 Write Zeroes Command: Supported 00:08:00.612 Set Features Save Field: Supported 00:08:00.612 Reservations: Not Supported 00:08:00.612 Timestamp: Supported 00:08:00.612 Copy: Supported 00:08:00.612 Volatile Write Cache: Present 00:08:00.612 Atomic Write Unit (Normal): 1 00:08:00.612 Atomic Write Unit (PFail): 1 00:08:00.612 Atomic Compare & Write Unit: 1 00:08:00.612 Fused Compare & Write: Not Supported 00:08:00.612 Scatter-Gather List 00:08:00.612 SGL Command Set: Supported 00:08:00.612 SGL Keyed: Not Supported 00:08:00.612 SGL Bit Bucket Descriptor: Not Supported 00:08:00.612 SGL Metadata Pointer: Not Supported 00:08:00.612 Oversized SGL: Not Supported 00:08:00.612 SGL Metadata Address: Not Supported 00:08:00.612 SGL Offset: Not Supported 00:08:00.612 Transport SGL Data Block: Not Supported 00:08:00.612 Replay Protected Memory Block: Not Supported 00:08:00.612 00:08:00.612 Firmware Slot Information 00:08:00.612 ========================= 00:08:00.612 Active slot: 1 00:08:00.612 Slot 1 Firmware Revision: 1.0 00:08:00.612 00:08:00.612 00:08:00.612 Commands Supported and Effects 00:08:00.612 ============================== 00:08:00.612 Admin Commands 00:08:00.612 -------------- 00:08:00.612 Delete I/O Submission Queue (00h): Supported 00:08:00.612 Create I/O Submission Queue (01h): Supported 00:08:00.612 Get Log Page (02h): Supported 00:08:00.612 Delete I/O Completion Queue (04h): Supported 00:08:00.612 Create I/O Completion Queue (05h): Supported 00:08:00.612 Identify (06h): Supported 00:08:00.612 Abort (08h): Supported 00:08:00.612 Set Features (09h): Supported 00:08:00.612 Get Features (0Ah): Supported 00:08:00.612 Asynchronous Event Request (0Ch): Supported 00:08:00.612 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.612 Directive Send (19h): Supported 00:08:00.612 Directive Receive (1Ah): Supported 00:08:00.612 Virtualization Management (1Ch): Supported 00:08:00.612 Doorbell Buffer Config (7Ch): Supported 00:08:00.612 Format NVM (80h): Supported LBA-Change 00:08:00.612 I/O Commands 00:08:00.612 ------------ 00:08:00.612 Flush (00h): Supported LBA-Change 00:08:00.612 Write (01h): Supported LBA-Change 00:08:00.612 Read (02h): Supported 00:08:00.612 Compare (05h): Supported 00:08:00.612 Write Zeroes (08h): Supported LBA-Change 00:08:00.612 Dataset Management (09h): Supported LBA-Change 00:08:00.612 Unknown (0Ch): Supported 00:08:00.612 Unknown (12h): Supported 00:08:00.612 Copy (19h): Supported LBA-Change 
00:08:00.612 Unknown (1Dh): Supported LBA-Change 00:08:00.612 00:08:00.612 Error Log 00:08:00.612 ========= 00:08:00.612 00:08:00.612 Arbitration 00:08:00.612 =========== 00:08:00.612 Arbitration Burst: no limit 00:08:00.612 00:08:00.612 Power Management 00:08:00.612 ================ 00:08:00.612 Number of Power States: 1 00:08:00.612 Current Power State: Power State #0 00:08:00.612 Power State #0: 00:08:00.612 Max Power: 25.00 W 00:08:00.612 Non-Operational State: Operational 00:08:00.612 Entry Latency: 16 microseconds 00:08:00.612 Exit Latency: 4 microseconds 00:08:00.612 Relative Read Throughput: 0 00:08:00.612 Relative Read Latency: 0 00:08:00.612 Relative Write Throughput: 0 00:08:00.612 Relative Write Latency: 0 00:08:00.612 Idle Power: Not Reported 00:08:00.612 Active Power: Not Reported 00:08:00.612 Non-Operational Permissive Mode: Not Supported 00:08:00.612 00:08:00.612 Health Information 00:08:00.612 ================== 00:08:00.612 Critical Warnings: 00:08:00.612 Available Spare Space: OK 00:08:00.612 Temperature: OK 00:08:00.612 Device Reliability: OK 00:08:00.612 Read Only: No 00:08:00.612 Volatile Memory Backup: OK 00:08:00.612 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.612 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.612 Available Spare: 0% 00:08:00.612 Available Spare Threshold: 0% 00:08:00.612 Life Percentage Used: 0% 00:08:00.612 Data Units Read: 2064 00:08:00.612 Data Units Written: 1851 00:08:00.612 Host Read Commands: 104446 00:08:00.612 Host Write Commands: 102715 00:08:00.612 Controller Busy Time: 0 minutes 00:08:00.612 Power Cycles: 0 00:08:00.612 Power On Hours: 0 hours 00:08:00.612 Unsafe Shutdowns: 0 00:08:00.612 Unrecoverable Media Errors: 0 00:08:00.612 Lifetime Error Log Entries: 0 00:08:00.612 Warning Temperature Time: 0 minutes 00:08:00.612 Critical Temperature Time: 0 minutes 00:08:00.612 00:08:00.612 Number of Queues 00:08:00.612 ================ 00:08:00.612 Number of I/O Submission Queues: 64 00:08:00.612 Number of I/O Completion Queues: 64 00:08:00.612 00:08:00.612 ZNS Specific Controller Data 00:08:00.612 ============================ 00:08:00.612 Zone Append Size Limit: 0 00:08:00.612 00:08:00.613 00:08:00.613 Active Namespaces 00:08:00.613 ================= 00:08:00.613 Namespace ID:1 00:08:00.613 Error Recovery Timeout: Unlimited 00:08:00.613 Command Set Identifier: NVM (00h) 00:08:00.613 Deallocate: Supported 00:08:00.613 Deallocated/Unwritten Error: Supported 00:08:00.613 Deallocated Read Value: All 0x00 00:08:00.613 Deallocate in Write Zeroes: Not Supported 00:08:00.613 Deallocated Guard Field: 0xFFFF 00:08:00.613 Flush: Supported 00:08:00.613 Reservation: Not Supported 00:08:00.613 Namespace Sharing Capabilities: Private 00:08:00.613 Size (in LBAs): 1048576 (4GiB) 00:08:00.613 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.613 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.613 Thin Provisioning: Not Supported 00:08:00.613 Per-NS Atomic Units: No 00:08:00.613 Maximum Single Source Range Length: 128 00:08:00.613 Maximum Copy Length: 128 00:08:00.613 Maximum Source Range Count: 128 00:08:00.613 NGUID/EUI64 Never Reused: No 00:08:00.613 Namespace Write Protected: No 00:08:00.613 Number of LBA Formats: 8 00:08:00.613 Current LBA Format: LBA Format #04 00:08:00.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.613 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.613 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.613 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.613 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:08:00.613 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.613 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.613 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.613 00:08:00.613 NVM Specific Namespace Data 00:08:00.613 =========================== 00:08:00.613 Logical Block Storage Tag Mask: 0 00:08:00.613 Protection Information Capabilities: 00:08:00.613 16b Guard Protection Information Storage Tag Support: No 00:08:00.613 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.613 Storage Tag Check Read Support: No 00:08:00.613 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Namespace ID:2 00:08:00.613 Error Recovery Timeout: Unlimited 00:08:00.613 Command Set Identifier: NVM (00h) 00:08:00.613 Deallocate: Supported 00:08:00.613 Deallocated/Unwritten Error: Supported 00:08:00.613 Deallocated Read Value: All 0x00 00:08:00.613 Deallocate in Write Zeroes: Not Supported 00:08:00.613 Deallocated Guard Field: 0xFFFF 00:08:00.613 Flush: Supported 00:08:00.613 Reservation: Not Supported 00:08:00.613 Namespace Sharing Capabilities: Private 00:08:00.613 Size (in LBAs): 1048576 (4GiB) 00:08:00.613 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.613 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.613 Thin Provisioning: Not Supported 00:08:00.613 Per-NS Atomic Units: No 00:08:00.613 Maximum Single Source Range Length: 128 00:08:00.613 Maximum Copy Length: 128 00:08:00.613 Maximum Source Range Count: 128 00:08:00.613 NGUID/EUI64 Never Reused: No 00:08:00.613 Namespace Write Protected: No 00:08:00.613 Number of LBA Formats: 8 00:08:00.613 Current LBA Format: LBA Format #04 00:08:00.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.613 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.613 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.613 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.613 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.613 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.613 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.613 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.613 00:08:00.613 NVM Specific Namespace Data 00:08:00.613 =========================== 00:08:00.613 Logical Block Storage Tag Mask: 0 00:08:00.613 Protection Information Capabilities: 00:08:00.613 16b Guard Protection Information Storage Tag Support: No 00:08:00.613 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.613 Storage Tag Check Read Support: No 00:08:00.613 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:08:00.613 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Namespace ID:3 00:08:00.613 Error Recovery Timeout: Unlimited 00:08:00.613 Command Set Identifier: NVM (00h) 00:08:00.613 Deallocate: Supported 00:08:00.613 Deallocated/Unwritten Error: Supported 00:08:00.613 Deallocated Read Value: All 0x00 00:08:00.613 Deallocate in Write Zeroes: Not Supported 00:08:00.613 Deallocated Guard Field: 0xFFFF 00:08:00.613 Flush: Supported 00:08:00.613 Reservation: Not Supported 00:08:00.613 Namespace Sharing Capabilities: Private 00:08:00.613 Size (in LBAs): 1048576 (4GiB) 00:08:00.613 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.613 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.613 Thin Provisioning: Not Supported 00:08:00.613 Per-NS Atomic Units: No 00:08:00.613 Maximum Single Source Range Length: 128 00:08:00.613 Maximum Copy Length: 128 00:08:00.613 Maximum Source Range Count: 128 00:08:00.613 NGUID/EUI64 Never Reused: No 00:08:00.613 Namespace Write Protected: No 00:08:00.613 Number of LBA Formats: 8 00:08:00.613 Current LBA Format: LBA Format #04 00:08:00.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.613 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.613 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.613 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.613 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.613 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.613 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.613 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.613 00:08:00.613 NVM Specific Namespace Data 00:08:00.613 =========================== 00:08:00.613 Logical Block Storage Tag Mask: 0 00:08:00.613 Protection Information Capabilities: 00:08:00.613 16b Guard Protection Information Storage Tag Support: No 00:08:00.613 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.613 Storage Tag Check Read Support: No 00:08:00.613 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.613 04:13:46 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.613 04:13:46 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:00.613 ===================================================== 00:08:00.613 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.613 ===================================================== 00:08:00.613 Controller Capabilities/Features 00:08:00.613 ================================ 00:08:00.613 Vendor ID: 1b36 00:08:00.613 Subsystem Vendor ID: 1af4 00:08:00.613 Serial Number: 12340 00:08:00.613 Model Number: QEMU NVMe Ctrl 00:08:00.613 Firmware Version: 8.0.0 00:08:00.613 Recommended Arb Burst: 6 00:08:00.613 IEEE OUI Identifier: 00 54 52 00:08:00.613 Multi-path I/O 00:08:00.613 May have multiple subsystem ports: No 00:08:00.613 May have multiple controllers: No 00:08:00.613 Associated with SR-IOV VF: No 00:08:00.614 Max Data Transfer Size: 524288 00:08:00.614 Max Number of Namespaces: 256 00:08:00.614 Max Number of I/O Queues: 64 00:08:00.614 NVMe Specification Version (VS): 1.4 00:08:00.614 NVMe Specification Version (Identify): 1.4 00:08:00.614 Maximum Queue Entries: 2048 00:08:00.614 Contiguous Queues Required: Yes 00:08:00.614 Arbitration Mechanisms Supported 00:08:00.614 Weighted Round Robin: Not Supported 00:08:00.614 Vendor Specific: Not Supported 00:08:00.614 Reset Timeout: 7500 ms 00:08:00.614 Doorbell Stride: 4 bytes 00:08:00.614 NVM Subsystem Reset: Not Supported 00:08:00.614 Command Sets Supported 00:08:00.614 NVM Command Set: Supported 00:08:00.614 Boot Partition: Not Supported 00:08:00.614 Memory Page Size Minimum: 4096 bytes 00:08:00.614 Memory Page Size Maximum: 65536 bytes 00:08:00.614 Persistent Memory Region: Not Supported 00:08:00.614 Optional Asynchronous Events Supported 00:08:00.614 Namespace Attribute Notices: Supported 00:08:00.614 Firmware Activation Notices: Not Supported 00:08:00.614 ANA Change Notices: Not Supported 00:08:00.614 PLE Aggregate Log Change Notices: Not Supported 00:08:00.614 LBA Status Info Alert Notices: Not Supported 00:08:00.614 EGE Aggregate Log Change Notices: Not Supported 00:08:00.614 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.614 Zone Descriptor Change Notices: Not Supported 00:08:00.614 Discovery Log Change Notices: Not Supported 00:08:00.614 Controller Attributes 00:08:00.614 128-bit Host Identifier: Not Supported 00:08:00.614 Non-Operational Permissive Mode: Not Supported 00:08:00.614 NVM Sets: Not Supported 00:08:00.614 Read Recovery Levels: Not Supported 00:08:00.614 Endurance Groups: Not Supported 00:08:00.614 Predictable Latency Mode: Not Supported 00:08:00.614 Traffic Based Keep ALive: Not Supported 00:08:00.614 Namespace Granularity: Not Supported 00:08:00.614 SQ Associations: Not Supported 00:08:00.614 UUID List: Not Supported 00:08:00.614 Multi-Domain Subsystem: Not Supported 00:08:00.614 Fixed Capacity Management: Not Supported 00:08:00.614 Variable Capacity Management: Not Supported 00:08:00.614 Delete Endurance Group: Not Supported 00:08:00.614 Delete NVM Set: Not Supported 00:08:00.614 Extended LBA Formats Supported: Supported 00:08:00.614 Flexible Data Placement Supported: Not Supported 00:08:00.614 00:08:00.614 Controller Memory Buffer Support 00:08:00.614 ================================ 00:08:00.614 Supported: No 00:08:00.614 00:08:00.614 Persistent Memory Region Support 00:08:00.614 ================================ 00:08:00.614 Supported: No 00:08:00.614 00:08:00.614 Admin Command Set Attributes 00:08:00.614 ============================ 00:08:00.614 Security Send/Receive: Not Supported 00:08:00.614 
Format NVM: Supported 00:08:00.614 Firmware Activate/Download: Not Supported 00:08:00.614 Namespace Management: Supported 00:08:00.614 Device Self-Test: Not Supported 00:08:00.614 Directives: Supported 00:08:00.614 NVMe-MI: Not Supported 00:08:00.614 Virtualization Management: Not Supported 00:08:00.614 Doorbell Buffer Config: Supported 00:08:00.614 Get LBA Status Capability: Not Supported 00:08:00.614 Command & Feature Lockdown Capability: Not Supported 00:08:00.614 Abort Command Limit: 4 00:08:00.614 Async Event Request Limit: 4 00:08:00.614 Number of Firmware Slots: N/A 00:08:00.614 Firmware Slot 1 Read-Only: N/A 00:08:00.614 Firmware Activation Without Reset: N/A 00:08:00.614 Multiple Update Detection Support: N/A 00:08:00.614 Firmware Update Granularity: No Information Provided 00:08:00.614 Per-Namespace SMART Log: Yes 00:08:00.614 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.614 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:00.614 Command Effects Log Page: Supported 00:08:00.614 Get Log Page Extended Data: Supported 00:08:00.614 Telemetry Log Pages: Not Supported 00:08:00.614 Persistent Event Log Pages: Not Supported 00:08:00.614 Supported Log Pages Log Page: May Support 00:08:00.614 Commands Supported & Effects Log Page: Not Supported 00:08:00.614 Feature Identifiers & Effects Log Page:May Support 00:08:00.614 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.614 Data Area 4 for Telemetry Log: Not Supported 00:08:00.614 Error Log Page Entries Supported: 1 00:08:00.614 Keep Alive: Not Supported 00:08:00.614 00:08:00.614 NVM Command Set Attributes 00:08:00.614 ========================== 00:08:00.614 Submission Queue Entry Size 00:08:00.614 Max: 64 00:08:00.614 Min: 64 00:08:00.614 Completion Queue Entry Size 00:08:00.614 Max: 16 00:08:00.614 Min: 16 00:08:00.614 Number of Namespaces: 256 00:08:00.614 Compare Command: Supported 00:08:00.614 Write Uncorrectable Command: Not Supported 00:08:00.614 Dataset Management Command: Supported 00:08:00.614 Write Zeroes Command: Supported 00:08:00.614 Set Features Save Field: Supported 00:08:00.614 Reservations: Not Supported 00:08:00.614 Timestamp: Supported 00:08:00.614 Copy: Supported 00:08:00.614 Volatile Write Cache: Present 00:08:00.614 Atomic Write Unit (Normal): 1 00:08:00.614 Atomic Write Unit (PFail): 1 00:08:00.614 Atomic Compare & Write Unit: 1 00:08:00.614 Fused Compare & Write: Not Supported 00:08:00.614 Scatter-Gather List 00:08:00.614 SGL Command Set: Supported 00:08:00.614 SGL Keyed: Not Supported 00:08:00.614 SGL Bit Bucket Descriptor: Not Supported 00:08:00.614 SGL Metadata Pointer: Not Supported 00:08:00.614 Oversized SGL: Not Supported 00:08:00.614 SGL Metadata Address: Not Supported 00:08:00.614 SGL Offset: Not Supported 00:08:00.614 Transport SGL Data Block: Not Supported 00:08:00.614 Replay Protected Memory Block: Not Supported 00:08:00.614 00:08:00.614 Firmware Slot Information 00:08:00.614 ========================= 00:08:00.614 Active slot: 1 00:08:00.614 Slot 1 Firmware Revision: 1.0 00:08:00.614 00:08:00.614 00:08:00.614 Commands Supported and Effects 00:08:00.614 ============================== 00:08:00.614 Admin Commands 00:08:00.614 -------------- 00:08:00.614 Delete I/O Submission Queue (00h): Supported 00:08:00.614 Create I/O Submission Queue (01h): Supported 00:08:00.614 Get Log Page (02h): Supported 00:08:00.614 Delete I/O Completion Queue (04h): Supported 00:08:00.614 Create I/O Completion Queue (05h): Supported 00:08:00.614 Identify (06h): Supported 00:08:00.614 Abort (08h): Supported 
00:08:00.614 Set Features (09h): Supported 00:08:00.614 Get Features (0Ah): Supported 00:08:00.614 Asynchronous Event Request (0Ch): Supported 00:08:00.614 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.614 Directive Send (19h): Supported 00:08:00.614 Directive Receive (1Ah): Supported 00:08:00.614 Virtualization Management (1Ch): Supported 00:08:00.614 Doorbell Buffer Config (7Ch): Supported 00:08:00.614 Format NVM (80h): Supported LBA-Change 00:08:00.614 I/O Commands 00:08:00.614 ------------ 00:08:00.614 Flush (00h): Supported LBA-Change 00:08:00.614 Write (01h): Supported LBA-Change 00:08:00.614 Read (02h): Supported 00:08:00.614 Compare (05h): Supported 00:08:00.614 Write Zeroes (08h): Supported LBA-Change 00:08:00.614 Dataset Management (09h): Supported LBA-Change 00:08:00.614 Unknown (0Ch): Supported 00:08:00.614 Unknown (12h): Supported 00:08:00.614 Copy (19h): Supported LBA-Change 00:08:00.614 Unknown (1Dh): Supported LBA-Change 00:08:00.614 00:08:00.614 Error Log 00:08:00.614 ========= 00:08:00.614 00:08:00.614 Arbitration 00:08:00.614 =========== 00:08:00.614 Arbitration Burst: no limit 00:08:00.614 00:08:00.614 Power Management 00:08:00.614 ================ 00:08:00.614 Number of Power States: 1 00:08:00.614 Current Power State: Power State #0 00:08:00.614 Power State #0: 00:08:00.614 Max Power: 25.00 W 00:08:00.614 Non-Operational State: Operational 00:08:00.614 Entry Latency: 16 microseconds 00:08:00.614 Exit Latency: 4 microseconds 00:08:00.614 Relative Read Throughput: 0 00:08:00.614 Relative Read Latency: 0 00:08:00.614 Relative Write Throughput: 0 00:08:00.614 Relative Write Latency: 0 00:08:00.877 Idle Power: Not Reported 00:08:00.877 Active Power: Not Reported 00:08:00.877 Non-Operational Permissive Mode: Not Supported 00:08:00.877 00:08:00.877 Health Information 00:08:00.877 ================== 00:08:00.877 Critical Warnings: 00:08:00.877 Available Spare Space: OK 00:08:00.877 Temperature: OK 00:08:00.877 Device Reliability: OK 00:08:00.877 Read Only: No 00:08:00.877 Volatile Memory Backup: OK 00:08:00.877 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.877 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.877 Available Spare: 0% 00:08:00.877 Available Spare Threshold: 0% 00:08:00.877 Life Percentage Used: 0% 00:08:00.877 Data Units Read: 636 00:08:00.877 Data Units Written: 565 00:08:00.877 Host Read Commands: 33911 00:08:00.877 Host Write Commands: 33697 00:08:00.877 Controller Busy Time: 0 minutes 00:08:00.877 Power Cycles: 0 00:08:00.877 Power On Hours: 0 hours 00:08:00.877 Unsafe Shutdowns: 0 00:08:00.877 Unrecoverable Media Errors: 0 00:08:00.877 Lifetime Error Log Entries: 0 00:08:00.877 Warning Temperature Time: 0 minutes 00:08:00.877 Critical Temperature Time: 0 minutes 00:08:00.877 00:08:00.877 Number of Queues 00:08:00.877 ================ 00:08:00.877 Number of I/O Submission Queues: 64 00:08:00.877 Number of I/O Completion Queues: 64 00:08:00.877 00:08:00.877 ZNS Specific Controller Data 00:08:00.877 ============================ 00:08:00.877 Zone Append Size Limit: 0 00:08:00.877 00:08:00.877 00:08:00.877 Active Namespaces 00:08:00.877 ================= 00:08:00.877 Namespace ID:1 00:08:00.877 Error Recovery Timeout: Unlimited 00:08:00.877 Command Set Identifier: NVM (00h) 00:08:00.877 Deallocate: Supported 00:08:00.877 Deallocated/Unwritten Error: Supported 00:08:00.877 Deallocated Read Value: All 0x00 00:08:00.877 Deallocate in Write Zeroes: Not Supported 00:08:00.877 Deallocated Guard Field: 0xFFFF 00:08:00.877 Flush: 
Supported 00:08:00.877 Reservation: Not Supported 00:08:00.877 Metadata Transferred as: Separate Metadata Buffer 00:08:00.877 Namespace Sharing Capabilities: Private 00:08:00.877 Size (in LBAs): 1548666 (5GiB) 00:08:00.877 Capacity (in LBAs): 1548666 (5GiB) 00:08:00.877 Utilization (in LBAs): 1548666 (5GiB) 00:08:00.877 Thin Provisioning: Not Supported 00:08:00.877 Per-NS Atomic Units: No 00:08:00.877 Maximum Single Source Range Length: 128 00:08:00.877 Maximum Copy Length: 128 00:08:00.877 Maximum Source Range Count: 128 00:08:00.877 NGUID/EUI64 Never Reused: No 00:08:00.877 Namespace Write Protected: No 00:08:00.877 Number of LBA Formats: 8 00:08:00.877 Current LBA Format: LBA Format #07 00:08:00.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.877 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.877 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.877 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.877 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.877 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.877 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.877 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.877 00:08:00.877 NVM Specific Namespace Data 00:08:00.877 =========================== 00:08:00.877 Logical Block Storage Tag Mask: 0 00:08:00.877 Protection Information Capabilities: 00:08:00.877 16b Guard Protection Information Storage Tag Support: No 00:08:00.877 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.877 Storage Tag Check Read Support: No 00:08:00.877 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.877 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.877 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.877 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.877 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.877 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.877 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.877 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.877 04:13:46 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.877 04:13:46 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:00.877 ===================================================== 00:08:00.877 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.877 ===================================================== 00:08:00.877 Controller Capabilities/Features 00:08:00.877 ================================ 00:08:00.877 Vendor ID: 1b36 00:08:00.877 Subsystem Vendor ID: 1af4 00:08:00.877 Serial Number: 12341 00:08:00.877 Model Number: QEMU NVMe Ctrl 00:08:00.877 Firmware Version: 8.0.0 00:08:00.877 Recommended Arb Burst: 6 00:08:00.877 IEEE OUI Identifier: 00 54 52 00:08:00.877 Multi-path I/O 00:08:00.877 May have multiple subsystem ports: No 00:08:00.877 May have multiple controllers: No 00:08:00.877 Associated with SR-IOV VF: No 00:08:00.877 Max Data Transfer Size: 524288 00:08:00.877 Max Number of Namespaces: 256 00:08:00.877 Max Number of I/O Queues: 64 00:08:00.877 NVMe 
Specification Version (VS): 1.4 00:08:00.877 NVMe Specification Version (Identify): 1.4 00:08:00.877 Maximum Queue Entries: 2048 00:08:00.877 Contiguous Queues Required: Yes 00:08:00.877 Arbitration Mechanisms Supported 00:08:00.877 Weighted Round Robin: Not Supported 00:08:00.877 Vendor Specific: Not Supported 00:08:00.877 Reset Timeout: 7500 ms 00:08:00.877 Doorbell Stride: 4 bytes 00:08:00.877 NVM Subsystem Reset: Not Supported 00:08:00.877 Command Sets Supported 00:08:00.877 NVM Command Set: Supported 00:08:00.877 Boot Partition: Not Supported 00:08:00.877 Memory Page Size Minimum: 4096 bytes 00:08:00.877 Memory Page Size Maximum: 65536 bytes 00:08:00.877 Persistent Memory Region: Not Supported 00:08:00.877 Optional Asynchronous Events Supported 00:08:00.877 Namespace Attribute Notices: Supported 00:08:00.877 Firmware Activation Notices: Not Supported 00:08:00.877 ANA Change Notices: Not Supported 00:08:00.877 PLE Aggregate Log Change Notices: Not Supported 00:08:00.877 LBA Status Info Alert Notices: Not Supported 00:08:00.877 EGE Aggregate Log Change Notices: Not Supported 00:08:00.877 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.877 Zone Descriptor Change Notices: Not Supported 00:08:00.877 Discovery Log Change Notices: Not Supported 00:08:00.877 Controller Attributes 00:08:00.877 128-bit Host Identifier: Not Supported 00:08:00.877 Non-Operational Permissive Mode: Not Supported 00:08:00.878 NVM Sets: Not Supported 00:08:00.878 Read Recovery Levels: Not Supported 00:08:00.878 Endurance Groups: Not Supported 00:08:00.878 Predictable Latency Mode: Not Supported 00:08:00.878 Traffic Based Keep ALive: Not Supported 00:08:00.878 Namespace Granularity: Not Supported 00:08:00.878 SQ Associations: Not Supported 00:08:00.878 UUID List: Not Supported 00:08:00.878 Multi-Domain Subsystem: Not Supported 00:08:00.878 Fixed Capacity Management: Not Supported 00:08:00.878 Variable Capacity Management: Not Supported 00:08:00.878 Delete Endurance Group: Not Supported 00:08:00.878 Delete NVM Set: Not Supported 00:08:00.878 Extended LBA Formats Supported: Supported 00:08:00.878 Flexible Data Placement Supported: Not Supported 00:08:00.878 00:08:00.878 Controller Memory Buffer Support 00:08:00.878 ================================ 00:08:00.878 Supported: No 00:08:00.878 00:08:00.878 Persistent Memory Region Support 00:08:00.878 ================================ 00:08:00.878 Supported: No 00:08:00.878 00:08:00.878 Admin Command Set Attributes 00:08:00.878 ============================ 00:08:00.878 Security Send/Receive: Not Supported 00:08:00.878 Format NVM: Supported 00:08:00.878 Firmware Activate/Download: Not Supported 00:08:00.878 Namespace Management: Supported 00:08:00.878 Device Self-Test: Not Supported 00:08:00.878 Directives: Supported 00:08:00.878 NVMe-MI: Not Supported 00:08:00.878 Virtualization Management: Not Supported 00:08:00.878 Doorbell Buffer Config: Supported 00:08:00.878 Get LBA Status Capability: Not Supported 00:08:00.878 Command & Feature Lockdown Capability: Not Supported 00:08:00.878 Abort Command Limit: 4 00:08:00.878 Async Event Request Limit: 4 00:08:00.878 Number of Firmware Slots: N/A 00:08:00.878 Firmware Slot 1 Read-Only: N/A 00:08:00.878 Firmware Activation Without Reset: N/A 00:08:00.878 Multiple Update Detection Support: N/A 00:08:00.878 Firmware Update Granularity: No Information Provided 00:08:00.878 Per-Namespace SMART Log: Yes 00:08:00.878 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.878 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:08:00.878 Command Effects Log Page: Supported 00:08:00.878 Get Log Page Extended Data: Supported 00:08:00.878 Telemetry Log Pages: Not Supported 00:08:00.878 Persistent Event Log Pages: Not Supported 00:08:00.878 Supported Log Pages Log Page: May Support 00:08:00.878 Commands Supported & Effects Log Page: Not Supported 00:08:00.878 Feature Identifiers & Effects Log Page:May Support 00:08:00.878 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.878 Data Area 4 for Telemetry Log: Not Supported 00:08:00.878 Error Log Page Entries Supported: 1 00:08:00.878 Keep Alive: Not Supported 00:08:00.878 00:08:00.878 NVM Command Set Attributes 00:08:00.878 ========================== 00:08:00.878 Submission Queue Entry Size 00:08:00.878 Max: 64 00:08:00.878 Min: 64 00:08:00.878 Completion Queue Entry Size 00:08:00.878 Max: 16 00:08:00.878 Min: 16 00:08:00.878 Number of Namespaces: 256 00:08:00.878 Compare Command: Supported 00:08:00.878 Write Uncorrectable Command: Not Supported 00:08:00.878 Dataset Management Command: Supported 00:08:00.878 Write Zeroes Command: Supported 00:08:00.878 Set Features Save Field: Supported 00:08:00.878 Reservations: Not Supported 00:08:00.878 Timestamp: Supported 00:08:00.878 Copy: Supported 00:08:00.878 Volatile Write Cache: Present 00:08:00.878 Atomic Write Unit (Normal): 1 00:08:00.878 Atomic Write Unit (PFail): 1 00:08:00.878 Atomic Compare & Write Unit: 1 00:08:00.878 Fused Compare & Write: Not Supported 00:08:00.878 Scatter-Gather List 00:08:00.878 SGL Command Set: Supported 00:08:00.878 SGL Keyed: Not Supported 00:08:00.878 SGL Bit Bucket Descriptor: Not Supported 00:08:00.878 SGL Metadata Pointer: Not Supported 00:08:00.878 Oversized SGL: Not Supported 00:08:00.878 SGL Metadata Address: Not Supported 00:08:00.878 SGL Offset: Not Supported 00:08:00.878 Transport SGL Data Block: Not Supported 00:08:00.878 Replay Protected Memory Block: Not Supported 00:08:00.878 00:08:00.878 Firmware Slot Information 00:08:00.878 ========================= 00:08:00.878 Active slot: 1 00:08:00.878 Slot 1 Firmware Revision: 1.0 00:08:00.878 00:08:00.878 00:08:00.878 Commands Supported and Effects 00:08:00.878 ============================== 00:08:00.878 Admin Commands 00:08:00.878 -------------- 00:08:00.878 Delete I/O Submission Queue (00h): Supported 00:08:00.878 Create I/O Submission Queue (01h): Supported 00:08:00.878 Get Log Page (02h): Supported 00:08:00.878 Delete I/O Completion Queue (04h): Supported 00:08:00.878 Create I/O Completion Queue (05h): Supported 00:08:00.878 Identify (06h): Supported 00:08:00.878 Abort (08h): Supported 00:08:00.878 Set Features (09h): Supported 00:08:00.878 Get Features (0Ah): Supported 00:08:00.878 Asynchronous Event Request (0Ch): Supported 00:08:00.878 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.878 Directive Send (19h): Supported 00:08:00.878 Directive Receive (1Ah): Supported 00:08:00.878 Virtualization Management (1Ch): Supported 00:08:00.878 Doorbell Buffer Config (7Ch): Supported 00:08:00.878 Format NVM (80h): Supported LBA-Change 00:08:00.878 I/O Commands 00:08:00.878 ------------ 00:08:00.878 Flush (00h): Supported LBA-Change 00:08:00.878 Write (01h): Supported LBA-Change 00:08:00.878 Read (02h): Supported 00:08:00.878 Compare (05h): Supported 00:08:00.878 Write Zeroes (08h): Supported LBA-Change 00:08:00.878 Dataset Management (09h): Supported LBA-Change 00:08:00.878 Unknown (0Ch): Supported 00:08:00.878 Unknown (12h): Supported 00:08:00.878 Copy (19h): Supported LBA-Change 00:08:00.878 Unknown (1Dh): 
Supported LBA-Change 00:08:00.878 00:08:00.878 Error Log 00:08:00.878 ========= 00:08:00.878 00:08:00.878 Arbitration 00:08:00.878 =========== 00:08:00.878 Arbitration Burst: no limit 00:08:00.878 00:08:00.878 Power Management 00:08:00.878 ================ 00:08:00.878 Number of Power States: 1 00:08:00.878 Current Power State: Power State #0 00:08:00.878 Power State #0: 00:08:00.878 Max Power: 25.00 W 00:08:00.878 Non-Operational State: Operational 00:08:00.878 Entry Latency: 16 microseconds 00:08:00.878 Exit Latency: 4 microseconds 00:08:00.878 Relative Read Throughput: 0 00:08:00.878 Relative Read Latency: 0 00:08:00.878 Relative Write Throughput: 0 00:08:00.878 Relative Write Latency: 0 00:08:00.878 Idle Power: Not Reported 00:08:00.878 Active Power: Not Reported 00:08:00.878 Non-Operational Permissive Mode: Not Supported 00:08:00.878 00:08:00.878 Health Information 00:08:00.878 ================== 00:08:00.878 Critical Warnings: 00:08:00.878 Available Spare Space: OK 00:08:00.878 Temperature: OK 00:08:00.878 Device Reliability: OK 00:08:00.878 Read Only: No 00:08:00.878 Volatile Memory Backup: OK 00:08:00.878 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.878 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.878 Available Spare: 0% 00:08:00.878 Available Spare Threshold: 0% 00:08:00.878 Life Percentage Used: 0% 00:08:00.878 Data Units Read: 962 00:08:00.878 Data Units Written: 822 00:08:00.878 Host Read Commands: 51175 00:08:00.878 Host Write Commands: 49871 00:08:00.878 Controller Busy Time: 0 minutes 00:08:00.878 Power Cycles: 0 00:08:00.878 Power On Hours: 0 hours 00:08:00.878 Unsafe Shutdowns: 0 00:08:00.878 Unrecoverable Media Errors: 0 00:08:00.878 Lifetime Error Log Entries: 0 00:08:00.878 Warning Temperature Time: 0 minutes 00:08:00.878 Critical Temperature Time: 0 minutes 00:08:00.878 00:08:00.878 Number of Queues 00:08:00.878 ================ 00:08:00.878 Number of I/O Submission Queues: 64 00:08:00.878 Number of I/O Completion Queues: 64 00:08:00.878 00:08:00.878 ZNS Specific Controller Data 00:08:00.878 ============================ 00:08:00.878 Zone Append Size Limit: 0 00:08:00.878 00:08:00.878 00:08:00.878 Active Namespaces 00:08:00.878 ================= 00:08:00.878 Namespace ID:1 00:08:00.878 Error Recovery Timeout: Unlimited 00:08:00.878 Command Set Identifier: NVM (00h) 00:08:00.878 Deallocate: Supported 00:08:00.878 Deallocated/Unwritten Error: Supported 00:08:00.878 Deallocated Read Value: All 0x00 00:08:00.879 Deallocate in Write Zeroes: Not Supported 00:08:00.879 Deallocated Guard Field: 0xFFFF 00:08:00.879 Flush: Supported 00:08:00.879 Reservation: Not Supported 00:08:00.879 Namespace Sharing Capabilities: Private 00:08:00.879 Size (in LBAs): 1310720 (5GiB) 00:08:00.879 Capacity (in LBAs): 1310720 (5GiB) 00:08:00.879 Utilization (in LBAs): 1310720 (5GiB) 00:08:00.879 Thin Provisioning: Not Supported 00:08:00.879 Per-NS Atomic Units: No 00:08:00.879 Maximum Single Source Range Length: 128 00:08:00.879 Maximum Copy Length: 128 00:08:00.879 Maximum Source Range Count: 128 00:08:00.879 NGUID/EUI64 Never Reused: No 00:08:00.879 Namespace Write Protected: No 00:08:00.879 Number of LBA Formats: 8 00:08:00.879 Current LBA Format: LBA Format #04 00:08:00.879 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.879 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.879 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.879 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.879 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:00.879 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.879 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.879 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.879 00:08:00.879 NVM Specific Namespace Data 00:08:00.879 =========================== 00:08:00.879 Logical Block Storage Tag Mask: 0 00:08:00.879 Protection Information Capabilities: 00:08:00.879 16b Guard Protection Information Storage Tag Support: No 00:08:00.879 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.879 Storage Tag Check Read Support: No 00:08:00.879 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.879 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.879 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.879 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.879 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.879 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.879 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.879 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.879 04:13:46 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.879 04:13:46 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:01.141 ===================================================== 00:08:01.141 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:01.141 ===================================================== 00:08:01.141 Controller Capabilities/Features 00:08:01.141 ================================ 00:08:01.141 Vendor ID: 1b36 00:08:01.141 Subsystem Vendor ID: 1af4 00:08:01.141 Serial Number: 12342 00:08:01.141 Model Number: QEMU NVMe Ctrl 00:08:01.141 Firmware Version: 8.0.0 00:08:01.141 Recommended Arb Burst: 6 00:08:01.141 IEEE OUI Identifier: 00 54 52 00:08:01.141 Multi-path I/O 00:08:01.141 May have multiple subsystem ports: No 00:08:01.141 May have multiple controllers: No 00:08:01.141 Associated with SR-IOV VF: No 00:08:01.141 Max Data Transfer Size: 524288 00:08:01.141 Max Number of Namespaces: 256 00:08:01.141 Max Number of I/O Queues: 64 00:08:01.141 NVMe Specification Version (VS): 1.4 00:08:01.141 NVMe Specification Version (Identify): 1.4 00:08:01.141 Maximum Queue Entries: 2048 00:08:01.141 Contiguous Queues Required: Yes 00:08:01.141 Arbitration Mechanisms Supported 00:08:01.141 Weighted Round Robin: Not Supported 00:08:01.141 Vendor Specific: Not Supported 00:08:01.141 Reset Timeout: 7500 ms 00:08:01.141 Doorbell Stride: 4 bytes 00:08:01.141 NVM Subsystem Reset: Not Supported 00:08:01.141 Command Sets Supported 00:08:01.141 NVM Command Set: Supported 00:08:01.141 Boot Partition: Not Supported 00:08:01.141 Memory Page Size Minimum: 4096 bytes 00:08:01.141 Memory Page Size Maximum: 65536 bytes 00:08:01.141 Persistent Memory Region: Not Supported 00:08:01.141 Optional Asynchronous Events Supported 00:08:01.141 Namespace Attribute Notices: Supported 00:08:01.141 Firmware Activation Notices: Not Supported 00:08:01.141 ANA Change Notices: Not Supported 00:08:01.141 PLE Aggregate Log Change Notices: Not Supported 00:08:01.141 LBA Status Info Alert Notices: 
Not Supported 00:08:01.141 EGE Aggregate Log Change Notices: Not Supported 00:08:01.141 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.141 Zone Descriptor Change Notices: Not Supported 00:08:01.141 Discovery Log Change Notices: Not Supported 00:08:01.141 Controller Attributes 00:08:01.141 128-bit Host Identifier: Not Supported 00:08:01.141 Non-Operational Permissive Mode: Not Supported 00:08:01.141 NVM Sets: Not Supported 00:08:01.141 Read Recovery Levels: Not Supported 00:08:01.141 Endurance Groups: Not Supported 00:08:01.141 Predictable Latency Mode: Not Supported 00:08:01.142 Traffic Based Keep ALive: Not Supported 00:08:01.142 Namespace Granularity: Not Supported 00:08:01.142 SQ Associations: Not Supported 00:08:01.142 UUID List: Not Supported 00:08:01.142 Multi-Domain Subsystem: Not Supported 00:08:01.142 Fixed Capacity Management: Not Supported 00:08:01.142 Variable Capacity Management: Not Supported 00:08:01.142 Delete Endurance Group: Not Supported 00:08:01.142 Delete NVM Set: Not Supported 00:08:01.142 Extended LBA Formats Supported: Supported 00:08:01.142 Flexible Data Placement Supported: Not Supported 00:08:01.142 00:08:01.142 Controller Memory Buffer Support 00:08:01.142 ================================ 00:08:01.142 Supported: No 00:08:01.142 00:08:01.142 Persistent Memory Region Support 00:08:01.142 ================================ 00:08:01.142 Supported: No 00:08:01.142 00:08:01.142 Admin Command Set Attributes 00:08:01.142 ============================ 00:08:01.142 Security Send/Receive: Not Supported 00:08:01.142 Format NVM: Supported 00:08:01.142 Firmware Activate/Download: Not Supported 00:08:01.142 Namespace Management: Supported 00:08:01.142 Device Self-Test: Not Supported 00:08:01.142 Directives: Supported 00:08:01.142 NVMe-MI: Not Supported 00:08:01.142 Virtualization Management: Not Supported 00:08:01.142 Doorbell Buffer Config: Supported 00:08:01.142 Get LBA Status Capability: Not Supported 00:08:01.142 Command & Feature Lockdown Capability: Not Supported 00:08:01.142 Abort Command Limit: 4 00:08:01.142 Async Event Request Limit: 4 00:08:01.142 Number of Firmware Slots: N/A 00:08:01.142 Firmware Slot 1 Read-Only: N/A 00:08:01.142 Firmware Activation Without Reset: N/A 00:08:01.142 Multiple Update Detection Support: N/A 00:08:01.142 Firmware Update Granularity: No Information Provided 00:08:01.142 Per-Namespace SMART Log: Yes 00:08:01.142 Asymmetric Namespace Access Log Page: Not Supported 00:08:01.142 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:01.142 Command Effects Log Page: Supported 00:08:01.142 Get Log Page Extended Data: Supported 00:08:01.142 Telemetry Log Pages: Not Supported 00:08:01.142 Persistent Event Log Pages: Not Supported 00:08:01.142 Supported Log Pages Log Page: May Support 00:08:01.142 Commands Supported & Effects Log Page: Not Supported 00:08:01.142 Feature Identifiers & Effects Log Page:May Support 00:08:01.142 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.142 Data Area 4 for Telemetry Log: Not Supported 00:08:01.142 Error Log Page Entries Supported: 1 00:08:01.142 Keep Alive: Not Supported 00:08:01.142 00:08:01.142 NVM Command Set Attributes 00:08:01.142 ========================== 00:08:01.142 Submission Queue Entry Size 00:08:01.142 Max: 64 00:08:01.142 Min: 64 00:08:01.142 Completion Queue Entry Size 00:08:01.142 Max: 16 00:08:01.142 Min: 16 00:08:01.142 Number of Namespaces: 256 00:08:01.142 Compare Command: Supported 00:08:01.142 Write Uncorrectable Command: Not Supported 00:08:01.142 Dataset Management Command: 
Supported 00:08:01.142 Write Zeroes Command: Supported 00:08:01.142 Set Features Save Field: Supported 00:08:01.142 Reservations: Not Supported 00:08:01.142 Timestamp: Supported 00:08:01.142 Copy: Supported 00:08:01.142 Volatile Write Cache: Present 00:08:01.142 Atomic Write Unit (Normal): 1 00:08:01.142 Atomic Write Unit (PFail): 1 00:08:01.142 Atomic Compare & Write Unit: 1 00:08:01.142 Fused Compare & Write: Not Supported 00:08:01.142 Scatter-Gather List 00:08:01.142 SGL Command Set: Supported 00:08:01.142 SGL Keyed: Not Supported 00:08:01.142 SGL Bit Bucket Descriptor: Not Supported 00:08:01.142 SGL Metadata Pointer: Not Supported 00:08:01.142 Oversized SGL: Not Supported 00:08:01.142 SGL Metadata Address: Not Supported 00:08:01.142 SGL Offset: Not Supported 00:08:01.142 Transport SGL Data Block: Not Supported 00:08:01.142 Replay Protected Memory Block: Not Supported 00:08:01.142 00:08:01.142 Firmware Slot Information 00:08:01.142 ========================= 00:08:01.142 Active slot: 1 00:08:01.142 Slot 1 Firmware Revision: 1.0 00:08:01.142 00:08:01.142 00:08:01.142 Commands Supported and Effects 00:08:01.142 ============================== 00:08:01.142 Admin Commands 00:08:01.142 -------------- 00:08:01.142 Delete I/O Submission Queue (00h): Supported 00:08:01.142 Create I/O Submission Queue (01h): Supported 00:08:01.142 Get Log Page (02h): Supported 00:08:01.142 Delete I/O Completion Queue (04h): Supported 00:08:01.142 Create I/O Completion Queue (05h): Supported 00:08:01.142 Identify (06h): Supported 00:08:01.142 Abort (08h): Supported 00:08:01.142 Set Features (09h): Supported 00:08:01.142 Get Features (0Ah): Supported 00:08:01.142 Asynchronous Event Request (0Ch): Supported 00:08:01.142 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.142 Directive Send (19h): Supported 00:08:01.142 Directive Receive (1Ah): Supported 00:08:01.142 Virtualization Management (1Ch): Supported 00:08:01.142 Doorbell Buffer Config (7Ch): Supported 00:08:01.142 Format NVM (80h): Supported LBA-Change 00:08:01.142 I/O Commands 00:08:01.142 ------------ 00:08:01.142 Flush (00h): Supported LBA-Change 00:08:01.142 Write (01h): Supported LBA-Change 00:08:01.142 Read (02h): Supported 00:08:01.142 Compare (05h): Supported 00:08:01.142 Write Zeroes (08h): Supported LBA-Change 00:08:01.142 Dataset Management (09h): Supported LBA-Change 00:08:01.142 Unknown (0Ch): Supported 00:08:01.142 Unknown (12h): Supported 00:08:01.142 Copy (19h): Supported LBA-Change 00:08:01.142 Unknown (1Dh): Supported LBA-Change 00:08:01.142 00:08:01.142 Error Log 00:08:01.142 ========= 00:08:01.142 00:08:01.142 Arbitration 00:08:01.142 =========== 00:08:01.142 Arbitration Burst: no limit 00:08:01.142 00:08:01.142 Power Management 00:08:01.142 ================ 00:08:01.142 Number of Power States: 1 00:08:01.142 Current Power State: Power State #0 00:08:01.142 Power State #0: 00:08:01.142 Max Power: 25.00 W 00:08:01.142 Non-Operational State: Operational 00:08:01.142 Entry Latency: 16 microseconds 00:08:01.142 Exit Latency: 4 microseconds 00:08:01.142 Relative Read Throughput: 0 00:08:01.142 Relative Read Latency: 0 00:08:01.142 Relative Write Throughput: 0 00:08:01.142 Relative Write Latency: 0 00:08:01.142 Idle Power: Not Reported 00:08:01.142 Active Power: Not Reported 00:08:01.142 Non-Operational Permissive Mode: Not Supported 00:08:01.142 00:08:01.142 Health Information 00:08:01.142 ================== 00:08:01.142 Critical Warnings: 00:08:01.142 Available Spare Space: OK 00:08:01.142 Temperature: OK 00:08:01.142 Device 
Reliability: OK 00:08:01.142 Read Only: No 00:08:01.142 Volatile Memory Backup: OK 00:08:01.142 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.142 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.142 Available Spare: 0% 00:08:01.142 Available Spare Threshold: 0% 00:08:01.142 Life Percentage Used: 0% 00:08:01.142 Data Units Read: 2064 00:08:01.142 Data Units Written: 1851 00:08:01.142 Host Read Commands: 104446 00:08:01.142 Host Write Commands: 102715 00:08:01.142 Controller Busy Time: 0 minutes 00:08:01.142 Power Cycles: 0 00:08:01.142 Power On Hours: 0 hours 00:08:01.142 Unsafe Shutdowns: 0 00:08:01.142 Unrecoverable Media Errors: 0 00:08:01.142 Lifetime Error Log Entries: 0 00:08:01.142 Warning Temperature Time: 0 minutes 00:08:01.142 Critical Temperature Time: 0 minutes 00:08:01.142 00:08:01.142 Number of Queues 00:08:01.142 ================ 00:08:01.142 Number of I/O Submission Queues: 64 00:08:01.142 Number of I/O Completion Queues: 64 00:08:01.142 00:08:01.142 ZNS Specific Controller Data 00:08:01.142 ============================ 00:08:01.142 Zone Append Size Limit: 0 00:08:01.142 00:08:01.142 00:08:01.142 Active Namespaces 00:08:01.142 ================= 00:08:01.142 Namespace ID:1 00:08:01.142 Error Recovery Timeout: Unlimited 00:08:01.142 Command Set Identifier: NVM (00h) 00:08:01.142 Deallocate: Supported 00:08:01.142 Deallocated/Unwritten Error: Supported 00:08:01.142 Deallocated Read Value: All 0x00 00:08:01.142 Deallocate in Write Zeroes: Not Supported 00:08:01.142 Deallocated Guard Field: 0xFFFF 00:08:01.142 Flush: Supported 00:08:01.142 Reservation: Not Supported 00:08:01.142 Namespace Sharing Capabilities: Private 00:08:01.142 Size (in LBAs): 1048576 (4GiB) 00:08:01.142 Capacity (in LBAs): 1048576 (4GiB) 00:08:01.142 Utilization (in LBAs): 1048576 (4GiB) 00:08:01.142 Thin Provisioning: Not Supported 00:08:01.142 Per-NS Atomic Units: No 00:08:01.143 Maximum Single Source Range Length: 128 00:08:01.143 Maximum Copy Length: 128 00:08:01.143 Maximum Source Range Count: 128 00:08:01.143 NGUID/EUI64 Never Reused: No 00:08:01.143 Namespace Write Protected: No 00:08:01.143 Number of LBA Formats: 8 00:08:01.143 Current LBA Format: LBA Format #04 00:08:01.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.143 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.143 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.143 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.143 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.143 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.143 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.143 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.143 00:08:01.143 NVM Specific Namespace Data 00:08:01.143 =========================== 00:08:01.143 Logical Block Storage Tag Mask: 0 00:08:01.143 Protection Information Capabilities: 00:08:01.143 16b Guard Protection Information Storage Tag Support: No 00:08:01.143 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.143 Storage Tag Check Read Support: No 00:08:01.143 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Namespace ID:2 00:08:01.143 Error Recovery Timeout: Unlimited 00:08:01.143 Command Set Identifier: NVM (00h) 00:08:01.143 Deallocate: Supported 00:08:01.143 Deallocated/Unwritten Error: Supported 00:08:01.143 Deallocated Read Value: All 0x00 00:08:01.143 Deallocate in Write Zeroes: Not Supported 00:08:01.143 Deallocated Guard Field: 0xFFFF 00:08:01.143 Flush: Supported 00:08:01.143 Reservation: Not Supported 00:08:01.143 Namespace Sharing Capabilities: Private 00:08:01.143 Size (in LBAs): 1048576 (4GiB) 00:08:01.143 Capacity (in LBAs): 1048576 (4GiB) 00:08:01.143 Utilization (in LBAs): 1048576 (4GiB) 00:08:01.143 Thin Provisioning: Not Supported 00:08:01.143 Per-NS Atomic Units: No 00:08:01.143 Maximum Single Source Range Length: 128 00:08:01.143 Maximum Copy Length: 128 00:08:01.143 Maximum Source Range Count: 128 00:08:01.143 NGUID/EUI64 Never Reused: No 00:08:01.143 Namespace Write Protected: No 00:08:01.143 Number of LBA Formats: 8 00:08:01.143 Current LBA Format: LBA Format #04 00:08:01.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.143 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.143 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.143 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.143 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.143 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.143 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.143 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.143 00:08:01.143 NVM Specific Namespace Data 00:08:01.143 =========================== 00:08:01.143 Logical Block Storage Tag Mask: 0 00:08:01.143 Protection Information Capabilities: 00:08:01.143 16b Guard Protection Information Storage Tag Support: No 00:08:01.143 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.143 Storage Tag Check Read Support: No 00:08:01.143 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Namespace ID:3 00:08:01.143 Error Recovery Timeout: Unlimited 00:08:01.143 Command Set Identifier: NVM (00h) 00:08:01.143 Deallocate: Supported 00:08:01.143 Deallocated/Unwritten Error: Supported 00:08:01.143 Deallocated Read Value: All 0x00 00:08:01.143 Deallocate in Write Zeroes: Not Supported 00:08:01.143 Deallocated Guard Field: 0xFFFF 00:08:01.143 Flush: Supported 00:08:01.143 Reservation: Not Supported 00:08:01.143 
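Just below, the test moves on to the next controller (0000:00:13.0) via the loop in nvme/nvme.sh and re-runs spdk_nvme_identify with a new transport ID. The autotest suite drives this from bash; the short Python sketch here only mirrors that invocation pattern for illustration. The binary path, the trtype/traddr string, the -i 0 flag, and the four PCIe addresses are taken from this log; everything else (the wrapper itself, the variable names) is assumed.

    # Illustrative sketch only: mirrors the bash loop "for bdf in ${bdfs[@]}" shown below.
    import subprocess

    SPDK_IDENTIFY = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify"  # path from the log
    bdfs = ["0000:00:13.0", "0000:00:10.0", "0000:00:11.0", "0000:00:12.0"]       # controllers seen in this run

    for bdf in bdfs:
        # -r selects the transport ID (PCIe transport, traddr = the device's BDF);
        # "-i 0" is passed through unchanged from the logged command.
        trid = f"trtype:PCIe traddr:{bdf}"
        subprocess.run([SPDK_IDENTIFY, "-r", trid, "-i", "0"], check=True)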
Namespace Sharing Capabilities: Private 00:08:01.143 Size (in LBAs): 1048576 (4GiB) 00:08:01.143 Capacity (in LBAs): 1048576 (4GiB) 00:08:01.143 Utilization (in LBAs): 1048576 (4GiB) 00:08:01.143 Thin Provisioning: Not Supported 00:08:01.143 Per-NS Atomic Units: No 00:08:01.143 Maximum Single Source Range Length: 128 00:08:01.143 Maximum Copy Length: 128 00:08:01.143 Maximum Source Range Count: 128 00:08:01.143 NGUID/EUI64 Never Reused: No 00:08:01.143 Namespace Write Protected: No 00:08:01.143 Number of LBA Formats: 8 00:08:01.143 Current LBA Format: LBA Format #04 00:08:01.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.143 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.143 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.143 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.143 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.143 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.143 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.143 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.143 00:08:01.143 NVM Specific Namespace Data 00:08:01.143 =========================== 00:08:01.143 Logical Block Storage Tag Mask: 0 00:08:01.143 Protection Information Capabilities: 00:08:01.143 16b Guard Protection Information Storage Tag Support: No 00:08:01.143 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.143 Storage Tag Check Read Support: No 00:08:01.143 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.143 04:13:46 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:01.143 04:13:46 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:01.405 ===================================================== 00:08:01.405 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:01.405 ===================================================== 00:08:01.405 Controller Capabilities/Features 00:08:01.405 ================================ 00:08:01.405 Vendor ID: 1b36 00:08:01.405 Subsystem Vendor ID: 1af4 00:08:01.405 Serial Number: 12343 00:08:01.405 Model Number: QEMU NVMe Ctrl 00:08:01.405 Firmware Version: 8.0.0 00:08:01.405 Recommended Arb Burst: 6 00:08:01.405 IEEE OUI Identifier: 00 54 52 00:08:01.405 Multi-path I/O 00:08:01.405 May have multiple subsystem ports: No 00:08:01.405 May have multiple controllers: Yes 00:08:01.405 Associated with SR-IOV VF: No 00:08:01.405 Max Data Transfer Size: 524288 00:08:01.405 Max Number of Namespaces: 256 00:08:01.405 Max Number of I/O Queues: 64 00:08:01.405 NVMe Specification Version (VS): 1.4 00:08:01.405 NVMe Specification Version (Identify): 1.4 00:08:01.405 Maximum Queue Entries: 2048 
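The identify output above reports namespace sizes in LBAs and temperatures in Kelvin, with the human-readable values in parentheses. A minimal sketch of those conversions, assuming the 4096-byte data size of the current LBA format (#04) and the integer Kelvin offset of 273 that matches the rounding the tool appears to use (323 K -> 50 C):

    # Unit conversions behind the values shown in parentheses in the identify output.
    def lbas_to_gib(lbas: int, block_size: int = 4096) -> float:
        # block_size assumes LBA Format #04 (Data Size: 4096, Metadata Size: 0)
        return lbas * block_size / (1 << 30)

    def kelvin_to_celsius(k: int) -> int:
        return k - 273

    print(lbas_to_gib(1048576))    # 4.0 -> "Size (in LBAs): 1048576 (4GiB)"
    print(lbas_to_gib(262144))     # 1.0 -> the 1GiB namespace reported further down for 0000:00:13.0
    print(kelvin_to_celsius(323))  # 50  -> "Current Temperature: 323 Kelvin (50 Celsius)"
    print(kelvin_to_celsius(343))  # 70  -> "Temperature Threshold: 343 Kelvin (70 Celsius)"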
00:08:01.405 Contiguous Queues Required: Yes 00:08:01.405 Arbitration Mechanisms Supported 00:08:01.405 Weighted Round Robin: Not Supported 00:08:01.405 Vendor Specific: Not Supported 00:08:01.405 Reset Timeout: 7500 ms 00:08:01.405 Doorbell Stride: 4 bytes 00:08:01.405 NVM Subsystem Reset: Not Supported 00:08:01.405 Command Sets Supported 00:08:01.405 NVM Command Set: Supported 00:08:01.405 Boot Partition: Not Supported 00:08:01.405 Memory Page Size Minimum: 4096 bytes 00:08:01.405 Memory Page Size Maximum: 65536 bytes 00:08:01.405 Persistent Memory Region: Not Supported 00:08:01.405 Optional Asynchronous Events Supported 00:08:01.405 Namespace Attribute Notices: Supported 00:08:01.405 Firmware Activation Notices: Not Supported 00:08:01.405 ANA Change Notices: Not Supported 00:08:01.405 PLE Aggregate Log Change Notices: Not Supported 00:08:01.405 LBA Status Info Alert Notices: Not Supported 00:08:01.405 EGE Aggregate Log Change Notices: Not Supported 00:08:01.405 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.405 Zone Descriptor Change Notices: Not Supported 00:08:01.405 Discovery Log Change Notices: Not Supported 00:08:01.405 Controller Attributes 00:08:01.405 128-bit Host Identifier: Not Supported 00:08:01.405 Non-Operational Permissive Mode: Not Supported 00:08:01.405 NVM Sets: Not Supported 00:08:01.405 Read Recovery Levels: Not Supported 00:08:01.405 Endurance Groups: Supported 00:08:01.405 Predictable Latency Mode: Not Supported 00:08:01.405 Traffic Based Keep ALive: Not Supported 00:08:01.405 Namespace Granularity: Not Supported 00:08:01.405 SQ Associations: Not Supported 00:08:01.405 UUID List: Not Supported 00:08:01.405 Multi-Domain Subsystem: Not Supported 00:08:01.405 Fixed Capacity Management: Not Supported 00:08:01.405 Variable Capacity Management: Not Supported 00:08:01.405 Delete Endurance Group: Not Supported 00:08:01.405 Delete NVM Set: Not Supported 00:08:01.405 Extended LBA Formats Supported: Supported 00:08:01.405 Flexible Data Placement Supported: Supported 00:08:01.405 00:08:01.405 Controller Memory Buffer Support 00:08:01.405 ================================ 00:08:01.405 Supported: No 00:08:01.405 00:08:01.405 Persistent Memory Region Support 00:08:01.405 ================================ 00:08:01.405 Supported: No 00:08:01.405 00:08:01.405 Admin Command Set Attributes 00:08:01.405 ============================ 00:08:01.405 Security Send/Receive: Not Supported 00:08:01.405 Format NVM: Supported 00:08:01.405 Firmware Activate/Download: Not Supported 00:08:01.405 Namespace Management: Supported 00:08:01.405 Device Self-Test: Not Supported 00:08:01.405 Directives: Supported 00:08:01.405 NVMe-MI: Not Supported 00:08:01.405 Virtualization Management: Not Supported 00:08:01.405 Doorbell Buffer Config: Supported 00:08:01.405 Get LBA Status Capability: Not Supported 00:08:01.405 Command & Feature Lockdown Capability: Not Supported 00:08:01.405 Abort Command Limit: 4 00:08:01.405 Async Event Request Limit: 4 00:08:01.405 Number of Firmware Slots: N/A 00:08:01.405 Firmware Slot 1 Read-Only: N/A 00:08:01.405 Firmware Activation Without Reset: N/A 00:08:01.405 Multiple Update Detection Support: N/A 00:08:01.405 Firmware Update Granularity: No Information Provided 00:08:01.405 Per-Namespace SMART Log: Yes 00:08:01.405 Asymmetric Namespace Access Log Page: Not Supported 00:08:01.405 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:01.405 Command Effects Log Page: Supported 00:08:01.405 Get Log Page Extended Data: Supported 00:08:01.405 Telemetry Log Pages: Not 
Supported 00:08:01.405 Persistent Event Log Pages: Not Supported 00:08:01.405 Supported Log Pages Log Page: May Support 00:08:01.405 Commands Supported & Effects Log Page: Not Supported 00:08:01.405 Feature Identifiers & Effects Log Page:May Support 00:08:01.405 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.405 Data Area 4 for Telemetry Log: Not Supported 00:08:01.405 Error Log Page Entries Supported: 1 00:08:01.405 Keep Alive: Not Supported 00:08:01.405 00:08:01.405 NVM Command Set Attributes 00:08:01.405 ========================== 00:08:01.405 Submission Queue Entry Size 00:08:01.405 Max: 64 00:08:01.405 Min: 64 00:08:01.405 Completion Queue Entry Size 00:08:01.405 Max: 16 00:08:01.405 Min: 16 00:08:01.405 Number of Namespaces: 256 00:08:01.405 Compare Command: Supported 00:08:01.405 Write Uncorrectable Command: Not Supported 00:08:01.405 Dataset Management Command: Supported 00:08:01.405 Write Zeroes Command: Supported 00:08:01.405 Set Features Save Field: Supported 00:08:01.405 Reservations: Not Supported 00:08:01.405 Timestamp: Supported 00:08:01.405 Copy: Supported 00:08:01.405 Volatile Write Cache: Present 00:08:01.405 Atomic Write Unit (Normal): 1 00:08:01.405 Atomic Write Unit (PFail): 1 00:08:01.405 Atomic Compare & Write Unit: 1 00:08:01.405 Fused Compare & Write: Not Supported 00:08:01.405 Scatter-Gather List 00:08:01.405 SGL Command Set: Supported 00:08:01.405 SGL Keyed: Not Supported 00:08:01.405 SGL Bit Bucket Descriptor: Not Supported 00:08:01.405 SGL Metadata Pointer: Not Supported 00:08:01.405 Oversized SGL: Not Supported 00:08:01.405 SGL Metadata Address: Not Supported 00:08:01.405 SGL Offset: Not Supported 00:08:01.405 Transport SGL Data Block: Not Supported 00:08:01.405 Replay Protected Memory Block: Not Supported 00:08:01.405 00:08:01.405 Firmware Slot Information 00:08:01.405 ========================= 00:08:01.405 Active slot: 1 00:08:01.405 Slot 1 Firmware Revision: 1.0 00:08:01.405 00:08:01.405 00:08:01.405 Commands Supported and Effects 00:08:01.405 ============================== 00:08:01.405 Admin Commands 00:08:01.405 -------------- 00:08:01.405 Delete I/O Submission Queue (00h): Supported 00:08:01.405 Create I/O Submission Queue (01h): Supported 00:08:01.405 Get Log Page (02h): Supported 00:08:01.405 Delete I/O Completion Queue (04h): Supported 00:08:01.405 Create I/O Completion Queue (05h): Supported 00:08:01.405 Identify (06h): Supported 00:08:01.405 Abort (08h): Supported 00:08:01.405 Set Features (09h): Supported 00:08:01.405 Get Features (0Ah): Supported 00:08:01.405 Asynchronous Event Request (0Ch): Supported 00:08:01.405 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.405 Directive Send (19h): Supported 00:08:01.405 Directive Receive (1Ah): Supported 00:08:01.405 Virtualization Management (1Ch): Supported 00:08:01.405 Doorbell Buffer Config (7Ch): Supported 00:08:01.405 Format NVM (80h): Supported LBA-Change 00:08:01.405 I/O Commands 00:08:01.405 ------------ 00:08:01.405 Flush (00h): Supported LBA-Change 00:08:01.405 Write (01h): Supported LBA-Change 00:08:01.405 Read (02h): Supported 00:08:01.405 Compare (05h): Supported 00:08:01.406 Write Zeroes (08h): Supported LBA-Change 00:08:01.406 Dataset Management (09h): Supported LBA-Change 00:08:01.406 Unknown (0Ch): Supported 00:08:01.406 Unknown (12h): Supported 00:08:01.406 Copy (19h): Supported LBA-Change 00:08:01.406 Unknown (1Dh): Supported LBA-Change 00:08:01.406 00:08:01.406 Error Log 00:08:01.406 ========= 00:08:01.406 00:08:01.406 Arbitration 00:08:01.406 =========== 
00:08:01.406 Arbitration Burst: no limit 00:08:01.406 00:08:01.406 Power Management 00:08:01.406 ================ 00:08:01.406 Number of Power States: 1 00:08:01.406 Current Power State: Power State #0 00:08:01.406 Power State #0: 00:08:01.406 Max Power: 25.00 W 00:08:01.406 Non-Operational State: Operational 00:08:01.406 Entry Latency: 16 microseconds 00:08:01.406 Exit Latency: 4 microseconds 00:08:01.406 Relative Read Throughput: 0 00:08:01.406 Relative Read Latency: 0 00:08:01.406 Relative Write Throughput: 0 00:08:01.406 Relative Write Latency: 0 00:08:01.406 Idle Power: Not Reported 00:08:01.406 Active Power: Not Reported 00:08:01.406 Non-Operational Permissive Mode: Not Supported 00:08:01.406 00:08:01.406 Health Information 00:08:01.406 ================== 00:08:01.406 Critical Warnings: 00:08:01.406 Available Spare Space: OK 00:08:01.406 Temperature: OK 00:08:01.406 Device Reliability: OK 00:08:01.406 Read Only: No 00:08:01.406 Volatile Memory Backup: OK 00:08:01.406 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.406 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.406 Available Spare: 0% 00:08:01.406 Available Spare Threshold: 0% 00:08:01.406 Life Percentage Used: 0% 00:08:01.406 Data Units Read: 778 00:08:01.406 Data Units Written: 707 00:08:01.406 Host Read Commands: 35615 00:08:01.406 Host Write Commands: 35038 00:08:01.406 Controller Busy Time: 0 minutes 00:08:01.406 Power Cycles: 0 00:08:01.406 Power On Hours: 0 hours 00:08:01.406 Unsafe Shutdowns: 0 00:08:01.406 Unrecoverable Media Errors: 0 00:08:01.406 Lifetime Error Log Entries: 0 00:08:01.406 Warning Temperature Time: 0 minutes 00:08:01.406 Critical Temperature Time: 0 minutes 00:08:01.406 00:08:01.406 Number of Queues 00:08:01.406 ================ 00:08:01.406 Number of I/O Submission Queues: 64 00:08:01.406 Number of I/O Completion Queues: 64 00:08:01.406 00:08:01.406 ZNS Specific Controller Data 00:08:01.406 ============================ 00:08:01.406 Zone Append Size Limit: 0 00:08:01.406 00:08:01.406 00:08:01.406 Active Namespaces 00:08:01.406 ================= 00:08:01.406 Namespace ID:1 00:08:01.406 Error Recovery Timeout: Unlimited 00:08:01.406 Command Set Identifier: NVM (00h) 00:08:01.406 Deallocate: Supported 00:08:01.406 Deallocated/Unwritten Error: Supported 00:08:01.406 Deallocated Read Value: All 0x00 00:08:01.406 Deallocate in Write Zeroes: Not Supported 00:08:01.406 Deallocated Guard Field: 0xFFFF 00:08:01.406 Flush: Supported 00:08:01.406 Reservation: Not Supported 00:08:01.406 Namespace Sharing Capabilities: Multiple Controllers 00:08:01.406 Size (in LBAs): 262144 (1GiB) 00:08:01.406 Capacity (in LBAs): 262144 (1GiB) 00:08:01.406 Utilization (in LBAs): 262144 (1GiB) 00:08:01.406 Thin Provisioning: Not Supported 00:08:01.406 Per-NS Atomic Units: No 00:08:01.406 Maximum Single Source Range Length: 128 00:08:01.406 Maximum Copy Length: 128 00:08:01.406 Maximum Source Range Count: 128 00:08:01.406 NGUID/EUI64 Never Reused: No 00:08:01.406 Namespace Write Protected: No 00:08:01.406 Endurance group ID: 1 00:08:01.406 Number of LBA Formats: 8 00:08:01.406 Current LBA Format: LBA Format #04 00:08:01.406 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.406 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.406 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.406 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.406 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.406 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.406 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:01.406 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.406 00:08:01.406 Get Feature FDP: 00:08:01.406 ================ 00:08:01.406 Enabled: Yes 00:08:01.406 FDP configuration index: 0 00:08:01.406 00:08:01.406 FDP configurations log page 00:08:01.406 =========================== 00:08:01.406 Number of FDP configurations: 1 00:08:01.406 Version: 0 00:08:01.406 Size: 112 00:08:01.406 FDP Configuration Descriptor: 0 00:08:01.406 Descriptor Size: 96 00:08:01.406 Reclaim Group Identifier format: 2 00:08:01.406 FDP Volatile Write Cache: Not Present 00:08:01.406 FDP Configuration: Valid 00:08:01.406 Vendor Specific Size: 0 00:08:01.406 Number of Reclaim Groups: 2 00:08:01.406 Number of Recalim Unit Handles: 8 00:08:01.406 Max Placement Identifiers: 128 00:08:01.406 Number of Namespaces Suppprted: 256 00:08:01.406 Reclaim unit Nominal Size: 6000000 bytes 00:08:01.406 Estimated Reclaim Unit Time Limit: Not Reported 00:08:01.406 RUH Desc #000: RUH Type: Initially Isolated 00:08:01.406 RUH Desc #001: RUH Type: Initially Isolated 00:08:01.406 RUH Desc #002: RUH Type: Initially Isolated 00:08:01.406 RUH Desc #003: RUH Type: Initially Isolated 00:08:01.406 RUH Desc #004: RUH Type: Initially Isolated 00:08:01.406 RUH Desc #005: RUH Type: Initially Isolated 00:08:01.406 RUH Desc #006: RUH Type: Initially Isolated 00:08:01.406 RUH Desc #007: RUH Type: Initially Isolated 00:08:01.406 00:08:01.406 FDP reclaim unit handle usage log page 00:08:01.406 ====================================== 00:08:01.406 Number of Reclaim Unit Handles: 8 00:08:01.406 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:01.406 RUH Usage Desc #001: RUH Attributes: Unused 00:08:01.406 RUH Usage Desc #002: RUH Attributes: Unused 00:08:01.406 RUH Usage Desc #003: RUH Attributes: Unused 00:08:01.406 RUH Usage Desc #004: RUH Attributes: Unused 00:08:01.406 RUH Usage Desc #005: RUH Attributes: Unused 00:08:01.406 RUH Usage Desc #006: RUH Attributes: Unused 00:08:01.406 RUH Usage Desc #007: RUH Attributes: Unused 00:08:01.406 00:08:01.406 FDP statistics log page 00:08:01.406 ======================= 00:08:01.406 Host bytes with metadata written: 443260928 00:08:01.406 Media bytes with metadata written: 443326464 00:08:01.406 Media bytes erased: 0 00:08:01.406 00:08:01.406 FDP events log page 00:08:01.406 =================== 00:08:01.406 Number of FDP events: 0 00:08:01.406 00:08:01.406 NVM Specific Namespace Data 00:08:01.406 =========================== 00:08:01.406 Logical Block Storage Tag Mask: 0 00:08:01.406 Protection Information Capabilities: 00:08:01.406 16b Guard Protection Information Storage Tag Support: No 00:08:01.406 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.406 Storage Tag Check Read Support: No 00:08:01.406 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.406 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.406 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.406 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.406 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.406 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.406 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.406 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.406 00:08:01.406 real 0m1.176s 00:08:01.406 user 0m0.411s 00:08:01.406 sys 0m0.513s 00:08:01.406 ************************************ 00:08:01.406 END TEST nvme_identify 00:08:01.406 ************************************ 00:08:01.406 04:13:47 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.406 04:13:47 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 04:13:47 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:01.406 04:13:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.406 04:13:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.406 04:13:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 ************************************ 00:08:01.406 START TEST nvme_perf 00:08:01.406 ************************************ 00:08:01.406 04:13:47 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:01.406 04:13:47 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:02.805 Initializing NVMe Controllers 00:08:02.805 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:02.805 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:02.805 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:02.805 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:02.805 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:02.805 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:02.805 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:02.805 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:02.805 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:02.805 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:02.805 Initialization complete. Launching workers. 
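The summary table that follows reports per-namespace IOPS, MiB/s, and average latency. A quick sanity check ties those columns back to the perf command line above (-q 128, -o 12288): throughput is IOPS times the I/O size, and by Little's law the average latency is roughly queue depth divided by IOPS. The figures below are copied from the table; the script itself is only an illustrative cross-check.

    # Cross-checking the summary table against the command line (-q 128 -o 12288).
    iops = 7957.39        # per-namespace IOPS from the table below
    io_size = 12288       # -o 12288 (bytes per I/O)
    queue_depth = 128     # -q 128

    mib_per_s = iops * io_size / (1024 * 1024)
    avg_latency_us = queue_depth / iops * 1e6   # Little's law: L = QD / throughput

    print(f"{mib_per_s:.2f} MiB/s")    # ~93.25, matching the MiB/s column
    print(f"{avg_latency_us:.0f} us")  # ~16086, close to the reported 16087.47 us average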
00:08:02.805 ======================================================== 00:08:02.805 Latency(us) 00:08:02.805 Device Information : IOPS MiB/s Average min max 00:08:02.805 PCIE (0000:00:13.0) NSID 1 from core 0: 7957.39 93.25 16087.47 6857.99 31517.54 00:08:02.805 PCIE (0000:00:10.0) NSID 1 from core 0: 7957.39 93.25 16070.29 5658.97 30904.95 00:08:02.805 PCIE (0000:00:11.0) NSID 1 from core 0: 7957.39 93.25 16053.62 5182.82 30502.31 00:08:02.805 PCIE (0000:00:12.0) NSID 1 from core 0: 7957.39 93.25 16035.94 4098.13 30951.34 00:08:02.805 PCIE (0000:00:12.0) NSID 2 from core 0: 7957.39 93.25 16017.61 3702.80 30533.21 00:08:02.805 PCIE (0000:00:12.0) NSID 3 from core 0: 7957.39 93.25 15999.57 3370.18 30156.92 00:08:02.805 ======================================================== 00:08:02.805 Total : 47744.33 559.50 16044.08 3370.18 31517.54 00:08:02.805 00:08:02.805 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:02.805 ================================================================================= 00:08:02.805 1.00000% : 11040.295us 00:08:02.805 10.00000% : 13208.025us 00:08:02.805 25.00000% : 14417.920us 00:08:02.805 50.00000% : 16031.114us 00:08:02.805 75.00000% : 17442.658us 00:08:02.805 90.00000% : 18854.203us 00:08:02.805 95.00000% : 20467.397us 00:08:02.805 98.00000% : 22988.012us 00:08:02.805 99.00000% : 25508.628us 00:08:02.805 99.50000% : 30449.034us 00:08:02.805 99.90000% : 31457.280us 00:08:02.805 99.99000% : 31658.929us 00:08:02.805 99.99900% : 31658.929us 00:08:02.805 99.99990% : 31658.929us 00:08:02.805 99.99999% : 31658.929us 00:08:02.805 00:08:02.805 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:02.805 ================================================================================= 00:08:02.805 1.00000% : 10939.471us 00:08:02.805 10.00000% : 13208.025us 00:08:02.805 25.00000% : 14518.745us 00:08:02.805 50.00000% : 15930.289us 00:08:02.805 75.00000% : 17442.658us 00:08:02.805 90.00000% : 18854.203us 00:08:02.805 95.00000% : 20568.222us 00:08:02.805 98.00000% : 23592.960us 00:08:02.805 99.00000% : 24802.855us 00:08:02.805 99.50000% : 30045.735us 00:08:02.805 99.90000% : 30852.332us 00:08:02.805 99.99000% : 31053.982us 00:08:02.805 99.99900% : 31053.982us 00:08:02.805 99.99990% : 31053.982us 00:08:02.805 99.99999% : 31053.982us 00:08:02.805 00:08:02.805 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:02.805 ================================================================================= 00:08:02.805 1.00000% : 10889.058us 00:08:02.805 10.00000% : 13006.375us 00:08:02.805 25.00000% : 14518.745us 00:08:02.806 50.00000% : 15930.289us 00:08:02.806 75.00000% : 17442.658us 00:08:02.806 90.00000% : 18854.203us 00:08:02.806 95.00000% : 20467.397us 00:08:02.806 98.00000% : 23693.785us 00:08:02.806 99.00000% : 24702.031us 00:08:02.806 99.50000% : 29844.086us 00:08:02.806 99.90000% : 30449.034us 00:08:02.806 99.99000% : 30650.683us 00:08:02.806 99.99900% : 30650.683us 00:08:02.806 99.99990% : 30650.683us 00:08:02.806 99.99999% : 30650.683us 00:08:02.806 00:08:02.806 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:02.806 ================================================================================= 00:08:02.806 1.00000% : 10032.049us 00:08:02.806 10.00000% : 13006.375us 00:08:02.806 25.00000% : 14619.569us 00:08:02.806 50.00000% : 15829.465us 00:08:02.806 75.00000% : 17442.658us 00:08:02.806 90.00000% : 19055.852us 00:08:02.806 95.00000% : 20769.871us 00:08:02.806 98.00000% : 22887.188us 
00:08:02.806 99.00000% : 24298.732us 00:08:02.806 99.50000% : 30247.385us 00:08:02.806 99.90000% : 30852.332us 00:08:02.806 99.99000% : 31053.982us 00:08:02.806 99.99900% : 31053.982us 00:08:02.806 99.99990% : 31053.982us 00:08:02.806 99.99999% : 31053.982us 00:08:02.806 00:08:02.806 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:02.806 ================================================================================= 00:08:02.806 1.00000% : 8620.505us 00:08:02.806 10.00000% : 13107.200us 00:08:02.806 25.00000% : 14518.745us 00:08:02.806 50.00000% : 15829.465us 00:08:02.806 75.00000% : 17442.658us 00:08:02.806 90.00000% : 18955.028us 00:08:02.806 95.00000% : 20265.748us 00:08:02.806 98.00000% : 22786.363us 00:08:02.806 99.00000% : 24702.031us 00:08:02.806 99.50000% : 29844.086us 00:08:02.806 99.90000% : 30449.034us 00:08:02.806 99.99000% : 30650.683us 00:08:02.806 99.99900% : 30650.683us 00:08:02.806 99.99990% : 30650.683us 00:08:02.806 99.99999% : 30650.683us 00:08:02.806 00:08:02.806 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:02.806 ================================================================================= 00:08:02.806 1.00000% : 7965.145us 00:08:02.806 10.00000% : 13208.025us 00:08:02.806 25.00000% : 14518.745us 00:08:02.806 50.00000% : 15829.465us 00:08:02.806 75.00000% : 17543.483us 00:08:02.806 90.00000% : 18854.203us 00:08:02.806 95.00000% : 20366.572us 00:08:02.806 98.00000% : 22786.363us 00:08:02.806 99.00000% : 25609.452us 00:08:02.806 99.50000% : 29440.788us 00:08:02.806 99.90000% : 30045.735us 00:08:02.806 99.99000% : 30247.385us 00:08:02.806 99.99900% : 30247.385us 00:08:02.806 99.99990% : 30247.385us 00:08:02.806 99.99999% : 30247.385us 00:08:02.806 00:08:02.806 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:02.806 ============================================================================== 00:08:02.806 Range in us Cumulative IO count 00:08:02.806 6856.074 - 6906.486: 0.0500% ( 4) 00:08:02.806 6906.486 - 6956.898: 0.0875% ( 3) 00:08:02.806 7007.311 - 7057.723: 0.1125% ( 2) 00:08:02.806 7057.723 - 7108.135: 0.1250% ( 1) 00:08:02.806 7108.135 - 7158.548: 0.1375% ( 1) 00:08:02.806 7158.548 - 7208.960: 0.1625% ( 2) 00:08:02.806 7208.960 - 7259.372: 0.1750% ( 1) 00:08:02.806 7259.372 - 7309.785: 0.2000% ( 2) 00:08:02.806 7309.785 - 7360.197: 0.2250% ( 2) 00:08:02.806 7360.197 - 7410.609: 0.2375% ( 1) 00:08:02.806 7410.609 - 7461.022: 0.2875% ( 4) 00:08:02.806 7461.022 - 7511.434: 0.3375% ( 4) 00:08:02.806 7511.434 - 7561.846: 0.3875% ( 4) 00:08:02.806 7561.846 - 7612.258: 0.4250% ( 3) 00:08:02.806 7612.258 - 7662.671: 0.4750% ( 4) 00:08:02.806 7662.671 - 7713.083: 0.5250% ( 4) 00:08:02.806 7713.083 - 7763.495: 0.5750% ( 4) 00:08:02.806 7763.495 - 7813.908: 0.6125% ( 3) 00:08:02.806 7813.908 - 7864.320: 0.6625% ( 4) 00:08:02.806 7864.320 - 7914.732: 0.7125% ( 4) 00:08:02.806 7914.732 - 7965.145: 0.7625% ( 4) 00:08:02.806 7965.145 - 8015.557: 0.8000% ( 3) 00:08:02.806 10788.234 - 10838.646: 0.8125% ( 1) 00:08:02.806 10838.646 - 10889.058: 0.8625% ( 4) 00:08:02.806 10889.058 - 10939.471: 0.9000% ( 3) 00:08:02.806 10939.471 - 10989.883: 0.9250% ( 2) 00:08:02.806 10989.883 - 11040.295: 1.0375% ( 9) 00:08:02.806 11040.295 - 11090.708: 1.1250% ( 7) 00:08:02.806 11090.708 - 11141.120: 1.1750% ( 4) 00:08:02.806 11141.120 - 11191.532: 1.2875% ( 9) 00:08:02.806 11191.532 - 11241.945: 1.4125% ( 10) 00:08:02.806 11241.945 - 11292.357: 1.5500% ( 11) 00:08:02.806 11292.357 - 11342.769: 1.6750% ( 10) 00:08:02.806 
11342.769 - 11393.182: 1.8125% ( 11) 00:08:02.806 11393.182 - 11443.594: 1.9625% ( 12) 00:08:02.806 11443.594 - 11494.006: 2.1000% ( 11) 00:08:02.806 11494.006 - 11544.418: 2.2375% ( 11) 00:08:02.806 11544.418 - 11594.831: 2.3750% ( 11) 00:08:02.806 11594.831 - 11645.243: 2.5375% ( 13) 00:08:02.806 11645.243 - 11695.655: 2.6875% ( 12) 00:08:02.806 11695.655 - 11746.068: 2.8500% ( 13) 00:08:02.806 11746.068 - 11796.480: 3.0125% ( 13) 00:08:02.806 11796.480 - 11846.892: 3.1625% ( 12) 00:08:02.806 11846.892 - 11897.305: 3.3375% ( 14) 00:08:02.806 11897.305 - 11947.717: 3.5625% ( 18) 00:08:02.806 11947.717 - 11998.129: 3.7625% ( 16) 00:08:02.806 11998.129 - 12048.542: 3.9500% ( 15) 00:08:02.806 12048.542 - 12098.954: 4.1375% ( 15) 00:08:02.806 12098.954 - 12149.366: 4.4000% ( 21) 00:08:02.806 12149.366 - 12199.778: 4.6625% ( 21) 00:08:02.806 12199.778 - 12250.191: 4.9000% ( 19) 00:08:02.806 12250.191 - 12300.603: 5.1375% ( 19) 00:08:02.806 12300.603 - 12351.015: 5.3375% ( 16) 00:08:02.806 12351.015 - 12401.428: 5.5375% ( 16) 00:08:02.806 12401.428 - 12451.840: 5.7375% ( 16) 00:08:02.806 12451.840 - 12502.252: 5.9000% ( 13) 00:08:02.806 12502.252 - 12552.665: 6.1375% ( 19) 00:08:02.806 12552.665 - 12603.077: 6.3875% ( 20) 00:08:02.806 12603.077 - 12653.489: 6.6625% ( 22) 00:08:02.806 12653.489 - 12703.902: 7.0000% ( 27) 00:08:02.806 12703.902 - 12754.314: 7.3250% ( 26) 00:08:02.806 12754.314 - 12804.726: 7.6625% ( 27) 00:08:02.806 12804.726 - 12855.138: 8.0500% ( 31) 00:08:02.806 12855.138 - 12905.551: 8.4375% ( 31) 00:08:02.806 12905.551 - 13006.375: 9.1625% ( 58) 00:08:02.806 13006.375 - 13107.200: 9.8625% ( 56) 00:08:02.806 13107.200 - 13208.025: 10.5125% ( 52) 00:08:02.806 13208.025 - 13308.849: 11.2250% ( 57) 00:08:02.806 13308.849 - 13409.674: 12.0625% ( 67) 00:08:02.806 13409.674 - 13510.498: 13.0750% ( 81) 00:08:02.806 13510.498 - 13611.323: 14.2000% ( 90) 00:08:02.806 13611.323 - 13712.148: 15.3875% ( 95) 00:08:02.806 13712.148 - 13812.972: 16.7000% ( 105) 00:08:02.806 13812.972 - 13913.797: 18.0000% ( 104) 00:08:02.806 13913.797 - 14014.622: 19.4375% ( 115) 00:08:02.806 14014.622 - 14115.446: 20.9250% ( 119) 00:08:02.806 14115.446 - 14216.271: 22.4000% ( 118) 00:08:02.806 14216.271 - 14317.095: 24.0250% ( 130) 00:08:02.806 14317.095 - 14417.920: 25.9250% ( 152) 00:08:02.806 14417.920 - 14518.745: 27.6250% ( 136) 00:08:02.806 14518.745 - 14619.569: 28.9625% ( 107) 00:08:02.806 14619.569 - 14720.394: 30.3750% ( 113) 00:08:02.806 14720.394 - 14821.218: 31.8875% ( 121) 00:08:02.806 14821.218 - 14922.043: 33.4250% ( 123) 00:08:02.806 14922.043 - 15022.868: 34.9500% ( 122) 00:08:02.806 15022.868 - 15123.692: 36.4875% ( 123) 00:08:02.806 15123.692 - 15224.517: 38.1875% ( 136) 00:08:02.806 15224.517 - 15325.342: 39.9250% ( 139) 00:08:02.806 15325.342 - 15426.166: 41.4375% ( 121) 00:08:02.806 15426.166 - 15526.991: 42.8750% ( 115) 00:08:02.806 15526.991 - 15627.815: 44.5375% ( 133) 00:08:02.806 15627.815 - 15728.640: 46.3125% ( 142) 00:08:02.806 15728.640 - 15829.465: 48.0625% ( 140) 00:08:02.806 15829.465 - 15930.289: 49.9250% ( 149) 00:08:02.806 15930.289 - 16031.114: 51.7250% ( 144) 00:08:02.806 16031.114 - 16131.938: 53.4375% ( 137) 00:08:02.806 16131.938 - 16232.763: 54.8875% ( 116) 00:08:02.806 16232.763 - 16333.588: 56.3375% ( 116) 00:08:02.806 16333.588 - 16434.412: 58.0875% ( 140) 00:08:02.806 16434.412 - 16535.237: 59.9375% ( 148) 00:08:02.806 16535.237 - 16636.062: 61.6625% ( 138) 00:08:02.806 16636.062 - 16736.886: 63.5875% ( 154) 00:08:02.806 16736.886 - 16837.711: 65.4750% ( 151) 
00:08:02.806 16837.711 - 16938.535: 67.1375% ( 133) 00:08:02.806 16938.535 - 17039.360: 68.9625% ( 146) 00:08:02.806 17039.360 - 17140.185: 70.5250% ( 125) 00:08:02.806 17140.185 - 17241.009: 72.0625% ( 123) 00:08:02.806 17241.009 - 17341.834: 73.6000% ( 123) 00:08:02.806 17341.834 - 17442.658: 75.1250% ( 122) 00:08:02.806 17442.658 - 17543.483: 76.6000% ( 118) 00:08:02.806 17543.483 - 17644.308: 77.9000% ( 104) 00:08:02.806 17644.308 - 17745.132: 79.0500% ( 92) 00:08:02.806 17745.132 - 17845.957: 80.4000% ( 108) 00:08:02.806 17845.957 - 17946.782: 81.7625% ( 109) 00:08:02.806 17946.782 - 18047.606: 82.9750% ( 97) 00:08:02.806 18047.606 - 18148.431: 84.1875% ( 97) 00:08:02.806 18148.431 - 18249.255: 85.3750% ( 95) 00:08:02.806 18249.255 - 18350.080: 86.4875% ( 89) 00:08:02.806 18350.080 - 18450.905: 87.4375% ( 76) 00:08:02.806 18450.905 - 18551.729: 88.2625% ( 66) 00:08:02.806 18551.729 - 18652.554: 89.1500% ( 71) 00:08:02.806 18652.554 - 18753.378: 89.8625% ( 57) 00:08:02.806 18753.378 - 18854.203: 90.4250% ( 45) 00:08:02.806 18854.203 - 18955.028: 90.9125% ( 39) 00:08:02.806 18955.028 - 19055.852: 91.3000% ( 31) 00:08:02.806 19055.852 - 19156.677: 91.6750% ( 30) 00:08:02.806 19156.677 - 19257.502: 92.0125% ( 27) 00:08:02.807 19257.502 - 19358.326: 92.3375% ( 26) 00:08:02.807 19358.326 - 19459.151: 92.6875% ( 28) 00:08:02.807 19459.151 - 19559.975: 92.9750% ( 23) 00:08:02.807 19559.975 - 19660.800: 93.2125% ( 19) 00:08:02.807 19660.800 - 19761.625: 93.4750% ( 21) 00:08:02.807 19761.625 - 19862.449: 93.7250% ( 20) 00:08:02.807 19862.449 - 19963.274: 93.9250% ( 16) 00:08:02.807 19963.274 - 20064.098: 94.1375% ( 17) 00:08:02.807 20064.098 - 20164.923: 94.2625% ( 10) 00:08:02.807 20164.923 - 20265.748: 94.4375% ( 14) 00:08:02.807 20265.748 - 20366.572: 94.6750% ( 19) 00:08:02.807 20366.572 - 20467.397: 95.0125% ( 27) 00:08:02.807 20467.397 - 20568.222: 95.2500% ( 19) 00:08:02.807 20568.222 - 20669.046: 95.5125% ( 21) 00:08:02.807 20669.046 - 20769.871: 95.7500% ( 19) 00:08:02.807 20769.871 - 20870.695: 95.9375% ( 15) 00:08:02.807 20870.695 - 20971.520: 96.1750% ( 19) 00:08:02.807 20971.520 - 21072.345: 96.3500% ( 14) 00:08:02.807 21072.345 - 21173.169: 96.5125% ( 13) 00:08:02.807 21173.169 - 21273.994: 96.6625% ( 12) 00:08:02.807 21273.994 - 21374.818: 96.7750% ( 9) 00:08:02.807 21374.818 - 21475.643: 96.8500% ( 6) 00:08:02.807 21475.643 - 21576.468: 96.9125% ( 5) 00:08:02.807 21576.468 - 21677.292: 96.9750% ( 5) 00:08:02.807 21677.292 - 21778.117: 97.0375% ( 5) 00:08:02.807 21778.117 - 21878.942: 97.1000% ( 5) 00:08:02.807 21878.942 - 21979.766: 97.1625% ( 5) 00:08:02.807 21979.766 - 22080.591: 97.2250% ( 5) 00:08:02.807 22080.591 - 22181.415: 97.2875% ( 5) 00:08:02.807 22181.415 - 22282.240: 97.3375% ( 4) 00:08:02.807 22282.240 - 22383.065: 97.4125% ( 6) 00:08:02.807 22383.065 - 22483.889: 97.5375% ( 10) 00:08:02.807 22483.889 - 22584.714: 97.6875% ( 12) 00:08:02.807 22584.714 - 22685.538: 97.8500% ( 13) 00:08:02.807 22685.538 - 22786.363: 97.9125% ( 5) 00:08:02.807 22786.363 - 22887.188: 97.9625% ( 4) 00:08:02.807 22887.188 - 22988.012: 98.0375% ( 6) 00:08:02.807 22988.012 - 23088.837: 98.1125% ( 6) 00:08:02.807 23088.837 - 23189.662: 98.1750% ( 5) 00:08:02.807 23189.662 - 23290.486: 98.2375% ( 5) 00:08:02.807 23290.486 - 23391.311: 98.3125% ( 6) 00:08:02.807 23391.311 - 23492.135: 98.3750% ( 5) 00:08:02.807 23492.135 - 23592.960: 98.4000% ( 2) 00:08:02.807 23996.258 - 24097.083: 98.4125% ( 1) 00:08:02.807 24097.083 - 24197.908: 98.4500% ( 3) 00:08:02.807 24197.908 - 24298.732: 98.5000% ( 
4) 00:08:02.807 24298.732 - 24399.557: 98.5375% ( 3) 00:08:02.807 24399.557 - 24500.382: 98.5875% ( 4) 00:08:02.807 24500.382 - 24601.206: 98.6750% ( 7) 00:08:02.807 24601.206 - 24702.031: 98.7125% ( 3) 00:08:02.807 24702.031 - 24802.855: 98.7625% ( 4) 00:08:02.807 24802.855 - 24903.680: 98.8000% ( 3) 00:08:02.807 24903.680 - 25004.505: 98.8375% ( 3) 00:08:02.807 25004.505 - 25105.329: 98.8625% ( 2) 00:08:02.807 25105.329 - 25206.154: 98.9000% ( 3) 00:08:02.807 25206.154 - 25306.978: 98.9500% ( 4) 00:08:02.807 25306.978 - 25407.803: 98.9875% ( 3) 00:08:02.807 25407.803 - 25508.628: 99.0375% ( 4) 00:08:02.807 25508.628 - 25609.452: 99.0750% ( 3) 00:08:02.807 25609.452 - 25710.277: 99.1125% ( 3) 00:08:02.807 25710.277 - 25811.102: 99.1625% ( 4) 00:08:02.807 25811.102 - 26012.751: 99.2000% ( 3) 00:08:02.807 29844.086 - 30045.735: 99.2750% ( 6) 00:08:02.807 30045.735 - 30247.385: 99.4000% ( 10) 00:08:02.807 30247.385 - 30449.034: 99.5125% ( 9) 00:08:02.807 30449.034 - 30650.683: 99.6375% ( 10) 00:08:02.807 30650.683 - 30852.332: 99.7625% ( 10) 00:08:02.807 30852.332 - 31053.982: 99.8125% ( 4) 00:08:02.807 31053.982 - 31255.631: 99.8250% ( 1) 00:08:02.807 31255.631 - 31457.280: 99.9625% ( 11) 00:08:02.807 31457.280 - 31658.929: 100.0000% ( 3) 00:08:02.807 00:08:02.807 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:02.807 ============================================================================== 00:08:02.807 Range in us Cumulative IO count 00:08:02.807 5646.178 - 5671.385: 0.0125% ( 1) 00:08:02.807 5671.385 - 5696.591: 0.0375% ( 2) 00:08:02.807 5747.003 - 5772.209: 0.0500% ( 1) 00:08:02.807 5772.209 - 5797.415: 0.0625% ( 1) 00:08:02.807 5797.415 - 5822.622: 0.0750% ( 1) 00:08:02.807 5873.034 - 5898.240: 0.0875% ( 1) 00:08:02.807 5898.240 - 5923.446: 0.1000% ( 1) 00:08:02.807 5923.446 - 5948.652: 0.1125% ( 1) 00:08:02.807 5999.065 - 6024.271: 0.1250% ( 1) 00:08:02.807 6024.271 - 6049.477: 0.1375% ( 1) 00:08:02.807 6049.477 - 6074.683: 0.1500% ( 1) 00:08:02.807 6099.889 - 6125.095: 0.1625% ( 1) 00:08:02.807 6125.095 - 6150.302: 0.1750% ( 1) 00:08:02.807 6150.302 - 6175.508: 0.1875% ( 1) 00:08:02.807 6225.920 - 6251.126: 0.2000% ( 1) 00:08:02.807 6251.126 - 6276.332: 0.2250% ( 2) 00:08:02.807 6326.745 - 6351.951: 0.2375% ( 1) 00:08:02.807 6351.951 - 6377.157: 0.2625% ( 2) 00:08:02.807 6402.363 - 6427.569: 0.3000% ( 3) 00:08:02.807 6452.775 - 6503.188: 0.3500% ( 4) 00:08:02.807 6503.188 - 6553.600: 0.3875% ( 3) 00:08:02.807 6553.600 - 6604.012: 0.4250% ( 3) 00:08:02.807 6604.012 - 6654.425: 0.4625% ( 3) 00:08:02.807 6654.425 - 6704.837: 0.5000% ( 3) 00:08:02.807 6704.837 - 6755.249: 0.5375% ( 3) 00:08:02.807 6755.249 - 6805.662: 0.5875% ( 4) 00:08:02.807 6805.662 - 6856.074: 0.6250% ( 3) 00:08:02.807 6856.074 - 6906.486: 0.6625% ( 3) 00:08:02.807 6906.486 - 6956.898: 0.7000% ( 3) 00:08:02.807 6956.898 - 7007.311: 0.7375% ( 3) 00:08:02.807 7007.311 - 7057.723: 0.7750% ( 3) 00:08:02.807 7057.723 - 7108.135: 0.8000% ( 2) 00:08:02.807 10536.172 - 10586.585: 0.8125% ( 1) 00:08:02.807 10586.585 - 10636.997: 0.8250% ( 1) 00:08:02.807 10636.997 - 10687.409: 0.8625% ( 3) 00:08:02.807 10687.409 - 10737.822: 0.8750% ( 1) 00:08:02.807 10737.822 - 10788.234: 0.9250% ( 4) 00:08:02.807 10788.234 - 10838.646: 0.9750% ( 4) 00:08:02.807 10889.058 - 10939.471: 1.0000% ( 2) 00:08:02.807 10939.471 - 10989.883: 1.0375% ( 3) 00:08:02.807 10989.883 - 11040.295: 1.0750% ( 3) 00:08:02.807 11040.295 - 11090.708: 1.1125% ( 3) 00:08:02.807 11090.708 - 11141.120: 1.1500% ( 3) 00:08:02.807 11141.120 - 
11191.532: 1.2875% ( 11) 00:08:02.807 11191.532 - 11241.945: 1.3500% ( 5) 00:08:02.807 11241.945 - 11292.357: 1.4625% ( 9) 00:08:02.807 11292.357 - 11342.769: 1.5500% ( 7) 00:08:02.807 11342.769 - 11393.182: 1.7625% ( 17) 00:08:02.807 11393.182 - 11443.594: 1.9250% ( 13) 00:08:02.807 11443.594 - 11494.006: 2.0625% ( 11) 00:08:02.807 11494.006 - 11544.418: 2.1625% ( 8) 00:08:02.807 11544.418 - 11594.831: 2.3375% ( 14) 00:08:02.807 11594.831 - 11645.243: 2.5125% ( 14) 00:08:02.807 11645.243 - 11695.655: 2.6750% ( 13) 00:08:02.807 11695.655 - 11746.068: 2.7625% ( 7) 00:08:02.807 11746.068 - 11796.480: 2.8375% ( 6) 00:08:02.807 11796.480 - 11846.892: 3.0125% ( 14) 00:08:02.807 11846.892 - 11897.305: 3.2500% ( 19) 00:08:02.807 11897.305 - 11947.717: 3.3750% ( 10) 00:08:02.807 11947.717 - 11998.129: 3.5250% ( 12) 00:08:02.807 11998.129 - 12048.542: 3.6625% ( 11) 00:08:02.807 12048.542 - 12098.954: 3.8625% ( 16) 00:08:02.807 12098.954 - 12149.366: 4.1250% ( 21) 00:08:02.807 12149.366 - 12199.778: 4.2500% ( 10) 00:08:02.807 12199.778 - 12250.191: 4.4125% ( 13) 00:08:02.807 12250.191 - 12300.603: 4.6875% ( 22) 00:08:02.807 12300.603 - 12351.015: 4.9875% ( 24) 00:08:02.807 12351.015 - 12401.428: 5.1750% ( 15) 00:08:02.807 12401.428 - 12451.840: 5.3375% ( 13) 00:08:02.807 12451.840 - 12502.252: 5.5000% ( 13) 00:08:02.807 12502.252 - 12552.665: 5.8875% ( 31) 00:08:02.807 12552.665 - 12603.077: 6.1625% ( 22) 00:08:02.807 12603.077 - 12653.489: 6.4500% ( 23) 00:08:02.807 12653.489 - 12703.902: 6.8500% ( 32) 00:08:02.807 12703.902 - 12754.314: 7.0250% ( 14) 00:08:02.807 12754.314 - 12804.726: 7.3125% ( 23) 00:08:02.807 12804.726 - 12855.138: 7.6625% ( 28) 00:08:02.807 12855.138 - 12905.551: 8.1375% ( 38) 00:08:02.807 12905.551 - 13006.375: 8.8625% ( 58) 00:08:02.807 13006.375 - 13107.200: 9.7750% ( 73) 00:08:02.807 13107.200 - 13208.025: 10.7500% ( 78) 00:08:02.807 13208.025 - 13308.849: 11.5875% ( 67) 00:08:02.807 13308.849 - 13409.674: 12.6875% ( 88) 00:08:02.807 13409.674 - 13510.498: 13.9625% ( 102) 00:08:02.807 13510.498 - 13611.323: 15.0125% ( 84) 00:08:02.807 13611.323 - 13712.148: 16.0375% ( 82) 00:08:02.807 13712.148 - 13812.972: 17.2750% ( 99) 00:08:02.807 13812.972 - 13913.797: 18.2625% ( 79) 00:08:02.807 13913.797 - 14014.622: 19.4875% ( 98) 00:08:02.807 14014.622 - 14115.446: 20.6250% ( 91) 00:08:02.807 14115.446 - 14216.271: 21.8250% ( 96) 00:08:02.807 14216.271 - 14317.095: 23.2500% ( 114) 00:08:02.807 14317.095 - 14417.920: 24.6750% ( 114) 00:08:02.807 14417.920 - 14518.745: 26.0500% ( 110) 00:08:02.807 14518.745 - 14619.569: 27.6500% ( 128) 00:08:02.807 14619.569 - 14720.394: 29.1125% ( 117) 00:08:02.807 14720.394 - 14821.218: 30.5500% ( 115) 00:08:02.807 14821.218 - 14922.043: 32.2625% ( 137) 00:08:02.807 14922.043 - 15022.868: 33.8500% ( 127) 00:08:02.807 15022.868 - 15123.692: 35.8125% ( 157) 00:08:02.807 15123.692 - 15224.517: 37.8250% ( 161) 00:08:02.807 15224.517 - 15325.342: 39.5000% ( 134) 00:08:02.807 15325.342 - 15426.166: 41.3500% ( 148) 00:08:02.807 15426.166 - 15526.991: 43.2500% ( 152) 00:08:02.807 15526.991 - 15627.815: 45.1375% ( 151) 00:08:02.807 15627.815 - 15728.640: 46.7375% ( 128) 00:08:02.807 15728.640 - 15829.465: 48.2875% ( 124) 00:08:02.807 15829.465 - 15930.289: 50.2625% ( 158) 00:08:02.807 15930.289 - 16031.114: 52.1250% ( 149) 00:08:02.807 16031.114 - 16131.938: 54.0375% ( 153) 00:08:02.808 16131.938 - 16232.763: 55.8250% ( 143) 00:08:02.808 16232.763 - 16333.588: 57.2500% ( 114) 00:08:02.808 16333.588 - 16434.412: 59.0250% ( 142) 00:08:02.808 16434.412 - 
16535.237: 60.6250% ( 128) 00:08:02.808 16535.237 - 16636.062: 62.2000% ( 126) 00:08:02.808 16636.062 - 16736.886: 63.9625% ( 141) 00:08:02.808 16736.886 - 16837.711: 65.7875% ( 146) 00:08:02.808 16837.711 - 16938.535: 67.3750% ( 127) 00:08:02.808 16938.535 - 17039.360: 69.3500% ( 158) 00:08:02.808 17039.360 - 17140.185: 70.9500% ( 128) 00:08:02.808 17140.185 - 17241.009: 72.5125% ( 125) 00:08:02.808 17241.009 - 17341.834: 74.6500% ( 171) 00:08:02.808 17341.834 - 17442.658: 76.4750% ( 146) 00:08:02.808 17442.658 - 17543.483: 78.2625% ( 143) 00:08:02.808 17543.483 - 17644.308: 79.8500% ( 127) 00:08:02.808 17644.308 - 17745.132: 81.3625% ( 121) 00:08:02.808 17745.132 - 17845.957: 82.6250% ( 101) 00:08:02.808 17845.957 - 17946.782: 83.4875% ( 69) 00:08:02.808 17946.782 - 18047.606: 84.5000% ( 81) 00:08:02.808 18047.606 - 18148.431: 85.3375% ( 67) 00:08:02.808 18148.431 - 18249.255: 86.0750% ( 59) 00:08:02.808 18249.255 - 18350.080: 87.1750% ( 88) 00:08:02.808 18350.080 - 18450.905: 87.7875% ( 49) 00:08:02.808 18450.905 - 18551.729: 88.3750% ( 47) 00:08:02.808 18551.729 - 18652.554: 89.4250% ( 84) 00:08:02.808 18652.554 - 18753.378: 89.9000% ( 38) 00:08:02.808 18753.378 - 18854.203: 90.4250% ( 42) 00:08:02.808 18854.203 - 18955.028: 90.7875% ( 29) 00:08:02.808 18955.028 - 19055.852: 91.2500% ( 37) 00:08:02.808 19055.852 - 19156.677: 91.7375% ( 39) 00:08:02.808 19156.677 - 19257.502: 92.0750% ( 27) 00:08:02.808 19257.502 - 19358.326: 92.4625% ( 31) 00:08:02.808 19358.326 - 19459.151: 92.8250% ( 29) 00:08:02.808 19459.151 - 19559.975: 93.1500% ( 26) 00:08:02.808 19559.975 - 19660.800: 93.3375% ( 15) 00:08:02.808 19660.800 - 19761.625: 93.5500% ( 17) 00:08:02.808 19761.625 - 19862.449: 93.8125% ( 21) 00:08:02.808 19862.449 - 19963.274: 93.9625% ( 12) 00:08:02.808 19963.274 - 20064.098: 94.0500% ( 7) 00:08:02.808 20064.098 - 20164.923: 94.3375% ( 23) 00:08:02.808 20164.923 - 20265.748: 94.4125% ( 6) 00:08:02.808 20265.748 - 20366.572: 94.5250% ( 9) 00:08:02.808 20366.572 - 20467.397: 94.6750% ( 12) 00:08:02.808 20467.397 - 20568.222: 95.0000% ( 26) 00:08:02.808 20568.222 - 20669.046: 95.2375% ( 19) 00:08:02.808 20669.046 - 20769.871: 95.4250% ( 15) 00:08:02.808 20769.871 - 20870.695: 95.6000% ( 14) 00:08:02.808 20870.695 - 20971.520: 95.8500% ( 20) 00:08:02.808 20971.520 - 21072.345: 95.9500% ( 8) 00:08:02.808 21072.345 - 21173.169: 96.1375% ( 15) 00:08:02.808 21173.169 - 21273.994: 96.2750% ( 11) 00:08:02.808 21273.994 - 21374.818: 96.4000% ( 10) 00:08:02.808 21374.818 - 21475.643: 96.5375% ( 11) 00:08:02.808 21475.643 - 21576.468: 96.6625% ( 10) 00:08:02.808 21576.468 - 21677.292: 96.7750% ( 9) 00:08:02.808 21677.292 - 21778.117: 96.8000% ( 2) 00:08:02.808 21778.117 - 21878.942: 96.8125% ( 1) 00:08:02.808 21878.942 - 21979.766: 96.9000% ( 7) 00:08:02.808 21979.766 - 22080.591: 96.9375% ( 3) 00:08:02.808 22080.591 - 22181.415: 96.9875% ( 4) 00:08:02.808 22181.415 - 22282.240: 97.0500% ( 5) 00:08:02.808 22282.240 - 22383.065: 97.1000% ( 4) 00:08:02.808 22383.065 - 22483.889: 97.1500% ( 4) 00:08:02.808 22483.889 - 22584.714: 97.2000% ( 4) 00:08:02.808 22584.714 - 22685.538: 97.2500% ( 4) 00:08:02.808 22685.538 - 22786.363: 97.3125% ( 5) 00:08:02.808 22786.363 - 22887.188: 97.3750% ( 5) 00:08:02.808 22887.188 - 22988.012: 97.4125% ( 3) 00:08:02.808 22988.012 - 23088.837: 97.5125% ( 8) 00:08:02.808 23088.837 - 23189.662: 97.5750% ( 5) 00:08:02.808 23189.662 - 23290.486: 97.6500% ( 6) 00:08:02.808 23290.486 - 23391.311: 97.7500% ( 8) 00:08:02.808 23391.311 - 23492.135: 97.9500% ( 16) 00:08:02.808 
23492.135 - 23592.960: 98.0000% ( 4) 00:08:02.808 23592.960 - 23693.785: 98.0750% ( 6) 00:08:02.808 23693.785 - 23794.609: 98.1250% ( 4) 00:08:02.808 23794.609 - 23895.434: 98.2750% ( 12) 00:08:02.808 23895.434 - 23996.258: 98.3500% ( 6) 00:08:02.808 23996.258 - 24097.083: 98.4375% ( 7) 00:08:02.808 24097.083 - 24197.908: 98.5250% ( 7) 00:08:02.808 24197.908 - 24298.732: 98.6250% ( 8) 00:08:02.808 24298.732 - 24399.557: 98.7250% ( 8) 00:08:02.808 24399.557 - 24500.382: 98.8000% ( 6) 00:08:02.808 24500.382 - 24601.206: 98.9000% ( 8) 00:08:02.808 24601.206 - 24702.031: 98.9625% ( 5) 00:08:02.808 24702.031 - 24802.855: 99.0625% ( 8) 00:08:02.808 24802.855 - 24903.680: 99.0750% ( 1) 00:08:02.808 24903.680 - 25004.505: 99.1125% ( 3) 00:08:02.808 25004.505 - 25105.329: 99.1500% ( 3) 00:08:02.808 25105.329 - 25206.154: 99.1875% ( 3) 00:08:02.808 25206.154 - 25306.978: 99.2000% ( 1) 00:08:02.808 29440.788 - 29642.437: 99.3000% ( 8) 00:08:02.808 29642.437 - 29844.086: 99.4375% ( 11) 00:08:02.808 29844.086 - 30045.735: 99.5375% ( 8) 00:08:02.808 30045.735 - 30247.385: 99.6375% ( 8) 00:08:02.808 30247.385 - 30449.034: 99.7500% ( 9) 00:08:02.808 30449.034 - 30650.683: 99.8375% ( 7) 00:08:02.808 30650.683 - 30852.332: 99.9500% ( 9) 00:08:02.808 30852.332 - 31053.982: 100.0000% ( 4) 00:08:02.808 00:08:02.808 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:02.808 ============================================================================== 00:08:02.808 Range in us Cumulative IO count 00:08:02.808 5167.262 - 5192.468: 0.0625% ( 5) 00:08:02.808 5192.468 - 5217.674: 0.1000% ( 3) 00:08:02.808 5217.674 - 5242.880: 0.1125% ( 1) 00:08:02.808 5242.880 - 5268.086: 0.1375% ( 2) 00:08:02.808 5268.086 - 5293.292: 0.1500% ( 1) 00:08:02.808 5293.292 - 5318.498: 0.1750% ( 2) 00:08:02.808 5318.498 - 5343.705: 0.1875% ( 1) 00:08:02.808 5343.705 - 5368.911: 0.2125% ( 2) 00:08:02.808 5368.911 - 5394.117: 0.2375% ( 2) 00:08:02.808 5394.117 - 5419.323: 0.2500% ( 1) 00:08:02.808 5419.323 - 5444.529: 0.2750% ( 2) 00:08:02.808 5444.529 - 5469.735: 0.3000% ( 2) 00:08:02.808 5469.735 - 5494.942: 0.3250% ( 2) 00:08:02.808 5494.942 - 5520.148: 0.3500% ( 2) 00:08:02.808 5520.148 - 5545.354: 0.3750% ( 2) 00:08:02.808 5545.354 - 5570.560: 0.4000% ( 2) 00:08:02.808 5570.560 - 5595.766: 0.4250% ( 2) 00:08:02.808 5595.766 - 5620.972: 0.4500% ( 2) 00:08:02.808 5620.972 - 5646.178: 0.4750% ( 2) 00:08:02.808 5646.178 - 5671.385: 0.5000% ( 2) 00:08:02.808 5671.385 - 5696.591: 0.5125% ( 1) 00:08:02.808 5696.591 - 5721.797: 0.5375% ( 2) 00:08:02.808 5721.797 - 5747.003: 0.5625% ( 2) 00:08:02.808 5747.003 - 5772.209: 0.5875% ( 2) 00:08:02.808 5772.209 - 5797.415: 0.6125% ( 2) 00:08:02.808 5797.415 - 5822.622: 0.6375% ( 2) 00:08:02.808 5822.622 - 5847.828: 0.6625% ( 2) 00:08:02.808 5847.828 - 5873.034: 0.6875% ( 2) 00:08:02.808 5873.034 - 5898.240: 0.7000% ( 1) 00:08:02.808 5898.240 - 5923.446: 0.7250% ( 2) 00:08:02.808 5923.446 - 5948.652: 0.7375% ( 1) 00:08:02.808 5948.652 - 5973.858: 0.7625% ( 2) 00:08:02.808 5973.858 - 5999.065: 0.7875% ( 2) 00:08:02.808 5999.065 - 6024.271: 0.8000% ( 1) 00:08:02.808 10636.997 - 10687.409: 0.8250% ( 2) 00:08:02.808 10687.409 - 10737.822: 0.8750% ( 4) 00:08:02.808 10737.822 - 10788.234: 0.9125% ( 3) 00:08:02.808 10788.234 - 10838.646: 0.9625% ( 4) 00:08:02.808 10838.646 - 10889.058: 1.0000% ( 3) 00:08:02.808 10889.058 - 10939.471: 1.0375% ( 3) 00:08:02.808 10939.471 - 10989.883: 1.0750% ( 3) 00:08:02.808 10989.883 - 11040.295: 1.1125% ( 3) 00:08:02.808 11040.295 - 11090.708: 1.1500% ( 3) 
00:08:02.808 11090.708 - 11141.120: 1.2000% ( 4) 00:08:02.808 11141.120 - 11191.532: 1.2375% ( 3) 00:08:02.808 11191.532 - 11241.945: 1.2625% ( 2) 00:08:02.808 11241.945 - 11292.357: 1.3000% ( 3) 00:08:02.808 11292.357 - 11342.769: 1.3625% ( 5) 00:08:02.808 11342.769 - 11393.182: 1.4875% ( 10) 00:08:02.808 11393.182 - 11443.594: 1.6375% ( 12) 00:08:02.808 11443.594 - 11494.006: 1.7750% ( 11) 00:08:02.808 11494.006 - 11544.418: 1.9250% ( 12) 00:08:02.808 11544.418 - 11594.831: 2.1750% ( 20) 00:08:02.808 11594.831 - 11645.243: 2.4000% ( 18) 00:08:02.808 11645.243 - 11695.655: 2.5500% ( 12) 00:08:02.808 11695.655 - 11746.068: 2.7625% ( 17) 00:08:02.808 11746.068 - 11796.480: 2.9250% ( 13) 00:08:02.808 11796.480 - 11846.892: 3.1125% ( 15) 00:08:02.808 11846.892 - 11897.305: 3.3125% ( 16) 00:08:02.808 11897.305 - 11947.717: 3.5250% ( 17) 00:08:02.808 11947.717 - 11998.129: 3.7500% ( 18) 00:08:02.808 11998.129 - 12048.542: 3.9875% ( 19) 00:08:02.808 12048.542 - 12098.954: 4.2750% ( 23) 00:08:02.808 12098.954 - 12149.366: 4.5500% ( 22) 00:08:02.808 12149.366 - 12199.778: 4.8625% ( 25) 00:08:02.808 12199.778 - 12250.191: 5.1250% ( 21) 00:08:02.808 12250.191 - 12300.603: 5.4375% ( 25) 00:08:02.808 12300.603 - 12351.015: 5.7125% ( 22) 00:08:02.808 12351.015 - 12401.428: 5.9750% ( 21) 00:08:02.808 12401.428 - 12451.840: 6.2000% ( 18) 00:08:02.808 12451.840 - 12502.252: 6.4375% ( 19) 00:08:02.808 12502.252 - 12552.665: 6.5750% ( 11) 00:08:02.808 12552.665 - 12603.077: 6.7250% ( 12) 00:08:02.808 12603.077 - 12653.489: 6.8875% ( 13) 00:08:02.808 12653.489 - 12703.902: 7.2250% ( 27) 00:08:02.808 12703.902 - 12754.314: 7.8375% ( 49) 00:08:02.808 12754.314 - 12804.726: 8.2875% ( 36) 00:08:02.808 12804.726 - 12855.138: 8.7250% ( 35) 00:08:02.808 12855.138 - 12905.551: 9.3125% ( 47) 00:08:02.808 12905.551 - 13006.375: 10.2500% ( 75) 00:08:02.808 13006.375 - 13107.200: 11.1875% ( 75) 00:08:02.808 13107.200 - 13208.025: 12.1000% ( 73) 00:08:02.809 13208.025 - 13308.849: 12.9250% ( 66) 00:08:02.809 13308.849 - 13409.674: 13.7625% ( 67) 00:08:02.809 13409.674 - 13510.498: 14.6000% ( 67) 00:08:02.809 13510.498 - 13611.323: 15.5000% ( 72) 00:08:02.809 13611.323 - 13712.148: 16.5000% ( 80) 00:08:02.809 13712.148 - 13812.972: 17.3875% ( 71) 00:08:02.809 13812.972 - 13913.797: 18.3125% ( 74) 00:08:02.809 13913.797 - 14014.622: 19.2875% ( 78) 00:08:02.809 14014.622 - 14115.446: 20.2875% ( 80) 00:08:02.809 14115.446 - 14216.271: 21.2875% ( 80) 00:08:02.809 14216.271 - 14317.095: 22.4250% ( 91) 00:08:02.809 14317.095 - 14417.920: 23.7250% ( 104) 00:08:02.809 14417.920 - 14518.745: 25.1000% ( 110) 00:08:02.809 14518.745 - 14619.569: 26.5000% ( 112) 00:08:02.809 14619.569 - 14720.394: 28.0250% ( 122) 00:08:02.809 14720.394 - 14821.218: 29.5875% ( 125) 00:08:02.809 14821.218 - 14922.043: 31.2500% ( 133) 00:08:02.809 14922.043 - 15022.868: 33.3250% ( 166) 00:08:02.809 15022.868 - 15123.692: 35.2625% ( 155) 00:08:02.809 15123.692 - 15224.517: 37.3875% ( 170) 00:08:02.809 15224.517 - 15325.342: 39.7500% ( 189) 00:08:02.809 15325.342 - 15426.166: 41.6500% ( 152) 00:08:02.809 15426.166 - 15526.991: 43.6500% ( 160) 00:08:02.809 15526.991 - 15627.815: 45.5000% ( 148) 00:08:02.809 15627.815 - 15728.640: 47.6000% ( 168) 00:08:02.809 15728.640 - 15829.465: 49.5000% ( 152) 00:08:02.809 15829.465 - 15930.289: 51.0750% ( 126) 00:08:02.809 15930.289 - 16031.114: 52.7250% ( 132) 00:08:02.809 16031.114 - 16131.938: 54.3625% ( 131) 00:08:02.809 16131.938 - 16232.763: 56.0375% ( 134) 00:08:02.809 16232.763 - 16333.588: 57.6625% ( 130) 
00:08:02.809 16333.588 - 16434.412: 59.3375% ( 134) 00:08:02.809 16434.412 - 16535.237: 61.1000% ( 141) 00:08:02.809 16535.237 - 16636.062: 62.9500% ( 148) 00:08:02.809 16636.062 - 16736.886: 64.5250% ( 126) 00:08:02.809 16736.886 - 16837.711: 66.1750% ( 132) 00:08:02.809 16837.711 - 16938.535: 67.8625% ( 135) 00:08:02.809 16938.535 - 17039.360: 69.4625% ( 128) 00:08:02.809 17039.360 - 17140.185: 71.0500% ( 127) 00:08:02.809 17140.185 - 17241.009: 72.5250% ( 118) 00:08:02.809 17241.009 - 17341.834: 74.1625% ( 131) 00:08:02.809 17341.834 - 17442.658: 75.9500% ( 143) 00:08:02.809 17442.658 - 17543.483: 77.5375% ( 127) 00:08:02.809 17543.483 - 17644.308: 79.1125% ( 126) 00:08:02.809 17644.308 - 17745.132: 80.3000% ( 95) 00:08:02.809 17745.132 - 17845.957: 81.4875% ( 95) 00:08:02.809 17845.957 - 17946.782: 82.6000% ( 89) 00:08:02.809 17946.782 - 18047.606: 83.7625% ( 93) 00:08:02.809 18047.606 - 18148.431: 84.9250% ( 93) 00:08:02.809 18148.431 - 18249.255: 86.0625% ( 91) 00:08:02.809 18249.255 - 18350.080: 86.9500% ( 71) 00:08:02.809 18350.080 - 18450.905: 87.7875% ( 67) 00:08:02.809 18450.905 - 18551.729: 88.5625% ( 62) 00:08:02.809 18551.729 - 18652.554: 89.2250% ( 53) 00:08:02.809 18652.554 - 18753.378: 89.8125% ( 47) 00:08:02.809 18753.378 - 18854.203: 90.3625% ( 44) 00:08:02.809 18854.203 - 18955.028: 90.8250% ( 37) 00:08:02.809 18955.028 - 19055.852: 91.2250% ( 32) 00:08:02.809 19055.852 - 19156.677: 91.5625% ( 27) 00:08:02.809 19156.677 - 19257.502: 91.9500% ( 31) 00:08:02.809 19257.502 - 19358.326: 92.2500% ( 24) 00:08:02.809 19358.326 - 19459.151: 92.6625% ( 33) 00:08:02.809 19459.151 - 19559.975: 93.0875% ( 34) 00:08:02.809 19559.975 - 19660.800: 93.4625% ( 30) 00:08:02.809 19660.800 - 19761.625: 93.8625% ( 32) 00:08:02.809 19761.625 - 19862.449: 94.1375% ( 22) 00:08:02.809 19862.449 - 19963.274: 94.3250% ( 15) 00:08:02.809 19963.274 - 20064.098: 94.4750% ( 12) 00:08:02.809 20064.098 - 20164.923: 94.6375% ( 13) 00:08:02.809 20164.923 - 20265.748: 94.8000% ( 13) 00:08:02.809 20265.748 - 20366.572: 94.9500% ( 12) 00:08:02.809 20366.572 - 20467.397: 95.1250% ( 14) 00:08:02.809 20467.397 - 20568.222: 95.2750% ( 12) 00:08:02.809 20568.222 - 20669.046: 95.4625% ( 15) 00:08:02.809 20669.046 - 20769.871: 95.6250% ( 13) 00:08:02.809 20769.871 - 20870.695: 95.7625% ( 11) 00:08:02.809 20870.695 - 20971.520: 95.8875% ( 10) 00:08:02.809 20971.520 - 21072.345: 95.9750% ( 7) 00:08:02.809 21072.345 - 21173.169: 96.0000% ( 2) 00:08:02.809 21374.818 - 21475.643: 96.0250% ( 2) 00:08:02.809 21475.643 - 21576.468: 96.1000% ( 6) 00:08:02.809 21576.468 - 21677.292: 96.1750% ( 6) 00:08:02.809 21677.292 - 21778.117: 96.2875% ( 9) 00:08:02.809 21778.117 - 21878.942: 96.4500% ( 13) 00:08:02.809 21878.942 - 21979.766: 96.5875% ( 11) 00:08:02.809 21979.766 - 22080.591: 96.7250% ( 11) 00:08:02.809 22080.591 - 22181.415: 96.8625% ( 11) 00:08:02.809 22181.415 - 22282.240: 96.9875% ( 10) 00:08:02.809 22282.240 - 22383.065: 97.1125% ( 10) 00:08:02.809 22383.065 - 22483.889: 97.2375% ( 10) 00:08:02.809 22483.889 - 22584.714: 97.2875% ( 4) 00:08:02.809 22584.714 - 22685.538: 97.3625% ( 6) 00:08:02.809 22685.538 - 22786.363: 97.4250% ( 5) 00:08:02.809 22786.363 - 22887.188: 97.4875% ( 5) 00:08:02.809 22887.188 - 22988.012: 97.5500% ( 5) 00:08:02.809 22988.012 - 23088.837: 97.6125% ( 5) 00:08:02.809 23088.837 - 23189.662: 97.6625% ( 4) 00:08:02.809 23189.662 - 23290.486: 97.7375% ( 6) 00:08:02.809 23290.486 - 23391.311: 97.8125% ( 6) 00:08:02.809 23391.311 - 23492.135: 97.9000% ( 7) 00:08:02.809 23492.135 - 23592.960: 
97.9750% ( 6) 00:08:02.809 23592.960 - 23693.785: 98.0875% ( 9) 00:08:02.809 23693.785 - 23794.609: 98.1500% ( 5) 00:08:02.809 23794.609 - 23895.434: 98.2250% ( 6) 00:08:02.809 23895.434 - 23996.258: 98.3125% ( 7) 00:08:02.809 23996.258 - 24097.083: 98.4125% ( 8) 00:08:02.809 24097.083 - 24197.908: 98.4875% ( 6) 00:08:02.809 24197.908 - 24298.732: 98.6000% ( 9) 00:08:02.809 24298.732 - 24399.557: 98.7125% ( 9) 00:08:02.809 24399.557 - 24500.382: 98.8125% ( 8) 00:08:02.809 24500.382 - 24601.206: 98.9125% ( 8) 00:08:02.809 24601.206 - 24702.031: 99.0125% ( 8) 00:08:02.809 24702.031 - 24802.855: 99.1250% ( 9) 00:08:02.809 24802.855 - 24903.680: 99.2000% ( 6) 00:08:02.809 29037.489 - 29239.138: 99.2125% ( 1) 00:08:02.809 29239.138 - 29440.788: 99.3250% ( 9) 00:08:02.809 29440.788 - 29642.437: 99.4375% ( 9) 00:08:02.809 29642.437 - 29844.086: 99.5625% ( 10) 00:08:02.809 29844.086 - 30045.735: 99.6875% ( 10) 00:08:02.809 30045.735 - 30247.385: 99.8250% ( 11) 00:08:02.809 30247.385 - 30449.034: 99.9625% ( 11) 00:08:02.809 30449.034 - 30650.683: 100.0000% ( 3) 00:08:02.809 00:08:02.809 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:02.809 ============================================================================== 00:08:02.809 Range in us Cumulative IO count 00:08:02.809 4083.397 - 4108.603: 0.0250% ( 2) 00:08:02.809 4108.603 - 4133.809: 0.0375% ( 1) 00:08:02.809 4133.809 - 4159.015: 0.0625% ( 2) 00:08:02.809 4159.015 - 4184.222: 0.0875% ( 2) 00:08:02.809 4234.634 - 4259.840: 0.1000% ( 1) 00:08:02.809 4259.840 - 4285.046: 0.1125% ( 1) 00:08:02.809 4285.046 - 4310.252: 0.1250% ( 1) 00:08:02.809 4360.665 - 4385.871: 0.1500% ( 2) 00:08:02.809 4385.871 - 4411.077: 0.1750% ( 2) 00:08:02.809 4486.695 - 4511.902: 0.2000% ( 2) 00:08:02.809 4511.902 - 4537.108: 0.2125% ( 1) 00:08:02.809 4537.108 - 4562.314: 0.2375% ( 2) 00:08:02.809 4637.932 - 4663.138: 0.2500% ( 1) 00:08:02.809 4663.138 - 4688.345: 0.2750% ( 2) 00:08:02.809 4688.345 - 4713.551: 0.3000% ( 2) 00:08:02.809 4713.551 - 4738.757: 0.3250% ( 2) 00:08:02.809 4738.757 - 4763.963: 0.3500% ( 2) 00:08:02.809 4763.963 - 4789.169: 0.3750% ( 2) 00:08:02.809 4789.169 - 4814.375: 0.3875% ( 1) 00:08:02.809 4814.375 - 4839.582: 0.4125% ( 2) 00:08:02.809 4839.582 - 4864.788: 0.4375% ( 2) 00:08:02.809 4864.788 - 4889.994: 0.4625% ( 2) 00:08:02.809 4889.994 - 4915.200: 0.4875% ( 2) 00:08:02.809 4915.200 - 4940.406: 0.5125% ( 2) 00:08:02.809 4940.406 - 4965.612: 0.5375% ( 2) 00:08:02.809 4965.612 - 4990.818: 0.5625% ( 2) 00:08:02.809 4990.818 - 5016.025: 0.5750% ( 1) 00:08:02.809 5016.025 - 5041.231: 0.6000% ( 2) 00:08:02.809 5041.231 - 5066.437: 0.6250% ( 2) 00:08:02.809 5066.437 - 5091.643: 0.6500% ( 2) 00:08:02.809 5091.643 - 5116.849: 0.6750% ( 2) 00:08:02.809 5116.849 - 5142.055: 0.7000% ( 2) 00:08:02.809 5142.055 - 5167.262: 0.7125% ( 1) 00:08:02.809 5167.262 - 5192.468: 0.7250% ( 1) 00:08:02.809 5192.468 - 5217.674: 0.7500% ( 2) 00:08:02.809 5217.674 - 5242.880: 0.7750% ( 2) 00:08:02.809 5242.880 - 5268.086: 0.8000% ( 2) 00:08:02.809 9779.988 - 9830.400: 0.8250% ( 2) 00:08:02.809 9830.400 - 9880.812: 0.8625% ( 3) 00:08:02.809 9880.812 - 9931.225: 0.9250% ( 5) 00:08:02.809 9931.225 - 9981.637: 0.9625% ( 3) 00:08:02.809 9981.637 - 10032.049: 1.0125% ( 4) 00:08:02.809 10032.049 - 10082.462: 1.0500% ( 3) 00:08:02.809 10082.462 - 10132.874: 1.1000% ( 4) 00:08:02.809 10132.874 - 10183.286: 1.1500% ( 4) 00:08:02.809 10183.286 - 10233.698: 1.2000% ( 4) 00:08:02.809 10233.698 - 10284.111: 1.2375% ( 3) 00:08:02.809 10284.111 - 10334.523: 
1.2875% ( 4) 00:08:02.809 10334.523 - 10384.935: 1.3250% ( 3) 00:08:02.809 10384.935 - 10435.348: 1.3750% ( 4) 00:08:02.809 10435.348 - 10485.760: 1.4250% ( 4) 00:08:02.809 10485.760 - 10536.172: 1.4625% ( 3) 00:08:02.809 10536.172 - 10586.585: 1.5125% ( 4) 00:08:02.809 10586.585 - 10636.997: 1.5625% ( 4) 00:08:02.809 10636.997 - 10687.409: 1.6000% ( 3) 00:08:02.809 10788.234 - 10838.646: 1.6375% ( 3) 00:08:02.809 10838.646 - 10889.058: 1.6750% ( 3) 00:08:02.809 10889.058 - 10939.471: 1.7000% ( 2) 00:08:02.809 10939.471 - 10989.883: 1.7500% ( 4) 00:08:02.810 10989.883 - 11040.295: 1.8250% ( 6) 00:08:02.810 11040.295 - 11090.708: 1.9500% ( 10) 00:08:02.810 11090.708 - 11141.120: 2.0500% ( 8) 00:08:02.810 11141.120 - 11191.532: 2.1875% ( 11) 00:08:02.810 11191.532 - 11241.945: 2.3000% ( 9) 00:08:02.810 11241.945 - 11292.357: 2.4375% ( 11) 00:08:02.810 11292.357 - 11342.769: 2.5625% ( 10) 00:08:02.810 11342.769 - 11393.182: 2.7000% ( 11) 00:08:02.810 11393.182 - 11443.594: 2.8375% ( 11) 00:08:02.810 11443.594 - 11494.006: 2.9250% ( 7) 00:08:02.810 11494.006 - 11544.418: 3.0500% ( 10) 00:08:02.810 11544.418 - 11594.831: 3.1875% ( 11) 00:08:02.810 11594.831 - 11645.243: 3.3000% ( 9) 00:08:02.810 11645.243 - 11695.655: 3.4000% ( 8) 00:08:02.810 11695.655 - 11746.068: 3.5000% ( 8) 00:08:02.810 11746.068 - 11796.480: 3.7500% ( 20) 00:08:02.810 11796.480 - 11846.892: 4.0125% ( 21) 00:08:02.810 11846.892 - 11897.305: 4.3750% ( 29) 00:08:02.810 11897.305 - 11947.717: 4.6250% ( 20) 00:08:02.810 11947.717 - 11998.129: 4.8500% ( 18) 00:08:02.810 11998.129 - 12048.542: 5.1500% ( 24) 00:08:02.810 12048.542 - 12098.954: 5.4625% ( 25) 00:08:02.810 12098.954 - 12149.366: 5.7875% ( 26) 00:08:02.810 12149.366 - 12199.778: 6.1000% ( 25) 00:08:02.810 12199.778 - 12250.191: 6.3125% ( 17) 00:08:02.810 12250.191 - 12300.603: 6.5125% ( 16) 00:08:02.810 12300.603 - 12351.015: 6.7375% ( 18) 00:08:02.810 12351.015 - 12401.428: 6.9625% ( 18) 00:08:02.810 12401.428 - 12451.840: 7.1875% ( 18) 00:08:02.810 12451.840 - 12502.252: 7.4250% ( 19) 00:08:02.810 12502.252 - 12552.665: 7.6750% ( 20) 00:08:02.810 12552.665 - 12603.077: 7.8875% ( 17) 00:08:02.810 12603.077 - 12653.489: 8.1750% ( 23) 00:08:02.810 12653.489 - 12703.902: 8.4625% ( 23) 00:08:02.810 12703.902 - 12754.314: 8.7625% ( 24) 00:08:02.810 12754.314 - 12804.726: 9.0375% ( 22) 00:08:02.810 12804.726 - 12855.138: 9.3250% ( 23) 00:08:02.810 12855.138 - 12905.551: 9.6125% ( 23) 00:08:02.810 12905.551 - 13006.375: 10.1875% ( 46) 00:08:02.810 13006.375 - 13107.200: 10.5500% ( 29) 00:08:02.810 13107.200 - 13208.025: 11.0500% ( 40) 00:08:02.810 13208.025 - 13308.849: 11.5625% ( 41) 00:08:02.810 13308.849 - 13409.674: 12.2000% ( 51) 00:08:02.810 13409.674 - 13510.498: 12.8125% ( 49) 00:08:02.810 13510.498 - 13611.323: 13.4375% ( 50) 00:08:02.810 13611.323 - 13712.148: 14.2125% ( 62) 00:08:02.810 13712.148 - 13812.972: 14.9625% ( 60) 00:08:02.810 13812.972 - 13913.797: 15.8625% ( 72) 00:08:02.810 13913.797 - 14014.622: 17.1000% ( 99) 00:08:02.810 14014.622 - 14115.446: 18.4375% ( 107) 00:08:02.810 14115.446 - 14216.271: 19.7750% ( 107) 00:08:02.810 14216.271 - 14317.095: 21.1875% ( 113) 00:08:02.810 14317.095 - 14417.920: 22.6125% ( 114) 00:08:02.810 14417.920 - 14518.745: 24.1500% ( 123) 00:08:02.810 14518.745 - 14619.569: 25.7875% ( 131) 00:08:02.810 14619.569 - 14720.394: 27.3500% ( 125) 00:08:02.810 14720.394 - 14821.218: 29.0375% ( 135) 00:08:02.810 14821.218 - 14922.043: 30.9625% ( 154) 00:08:02.810 14922.043 - 15022.868: 32.8375% ( 150) 00:08:02.810 15022.868 - 
15123.692: 35.3250% ( 199) 00:08:02.810 15123.692 - 15224.517: 37.9125% ( 207) 00:08:02.810 15224.517 - 15325.342: 40.2750% ( 189) 00:08:02.810 15325.342 - 15426.166: 42.6000% ( 186) 00:08:02.810 15426.166 - 15526.991: 45.0000% ( 192) 00:08:02.810 15526.991 - 15627.815: 47.1375% ( 171) 00:08:02.810 15627.815 - 15728.640: 49.1625% ( 162) 00:08:02.810 15728.640 - 15829.465: 51.0875% ( 154) 00:08:02.810 15829.465 - 15930.289: 52.9750% ( 151) 00:08:02.810 15930.289 - 16031.114: 54.6875% ( 137) 00:08:02.810 16031.114 - 16131.938: 56.3500% ( 133) 00:08:02.810 16131.938 - 16232.763: 57.8625% ( 121) 00:08:02.810 16232.763 - 16333.588: 59.1250% ( 101) 00:08:02.810 16333.588 - 16434.412: 60.3250% ( 96) 00:08:02.810 16434.412 - 16535.237: 61.6125% ( 103) 00:08:02.810 16535.237 - 16636.062: 63.1750% ( 125) 00:08:02.810 16636.062 - 16736.886: 65.0375% ( 149) 00:08:02.810 16736.886 - 16837.711: 66.9500% ( 153) 00:08:02.810 16837.711 - 16938.535: 68.5500% ( 128) 00:08:02.810 16938.535 - 17039.360: 70.0875% ( 123) 00:08:02.810 17039.360 - 17140.185: 71.5750% ( 119) 00:08:02.810 17140.185 - 17241.009: 73.1000% ( 122) 00:08:02.810 17241.009 - 17341.834: 74.4750% ( 110) 00:08:02.810 17341.834 - 17442.658: 75.8625% ( 111) 00:08:02.810 17442.658 - 17543.483: 77.2375% ( 110) 00:08:02.810 17543.483 - 17644.308: 78.3375% ( 88) 00:08:02.810 17644.308 - 17745.132: 79.5250% ( 95) 00:08:02.810 17745.132 - 17845.957: 80.7250% ( 96) 00:08:02.810 17845.957 - 17946.782: 81.8500% ( 90) 00:08:02.810 17946.782 - 18047.606: 82.8000% ( 76) 00:08:02.810 18047.606 - 18148.431: 83.7500% ( 76) 00:08:02.810 18148.431 - 18249.255: 84.5875% ( 67) 00:08:02.810 18249.255 - 18350.080: 85.3750% ( 63) 00:08:02.810 18350.080 - 18450.905: 86.0125% ( 51) 00:08:02.810 18450.905 - 18551.729: 86.7000% ( 55) 00:08:02.810 18551.729 - 18652.554: 87.4250% ( 58) 00:08:02.810 18652.554 - 18753.378: 88.1750% ( 60) 00:08:02.810 18753.378 - 18854.203: 88.9375% ( 61) 00:08:02.810 18854.203 - 18955.028: 89.5625% ( 50) 00:08:02.810 18955.028 - 19055.852: 90.2250% ( 53) 00:08:02.810 19055.852 - 19156.677: 90.8000% ( 46) 00:08:02.810 19156.677 - 19257.502: 91.3875% ( 47) 00:08:02.810 19257.502 - 19358.326: 91.8500% ( 37) 00:08:02.810 19358.326 - 19459.151: 92.3000% ( 36) 00:08:02.810 19459.151 - 19559.975: 92.7125% ( 33) 00:08:02.810 19559.975 - 19660.800: 93.0125% ( 24) 00:08:02.810 19660.800 - 19761.625: 93.2750% ( 21) 00:08:02.810 19761.625 - 19862.449: 93.4375% ( 13) 00:08:02.810 19862.449 - 19963.274: 93.6000% ( 13) 00:08:02.810 19963.274 - 20064.098: 93.7500% ( 12) 00:08:02.810 20064.098 - 20164.923: 93.9500% ( 16) 00:08:02.810 20164.923 - 20265.748: 94.1375% ( 15) 00:08:02.810 20265.748 - 20366.572: 94.2625% ( 10) 00:08:02.810 20366.572 - 20467.397: 94.4625% ( 16) 00:08:02.810 20467.397 - 20568.222: 94.6500% ( 15) 00:08:02.810 20568.222 - 20669.046: 94.9875% ( 27) 00:08:02.810 20669.046 - 20769.871: 95.2500% ( 21) 00:08:02.810 20769.871 - 20870.695: 95.5125% ( 21) 00:08:02.810 20870.695 - 20971.520: 95.7875% ( 22) 00:08:02.810 20971.520 - 21072.345: 96.0250% ( 19) 00:08:02.810 21072.345 - 21173.169: 96.2750% ( 20) 00:08:02.810 21173.169 - 21273.994: 96.5375% ( 21) 00:08:02.810 21273.994 - 21374.818: 96.7250% ( 15) 00:08:02.810 21374.818 - 21475.643: 96.9125% ( 15) 00:08:02.810 21475.643 - 21576.468: 97.0625% ( 12) 00:08:02.810 21576.468 - 21677.292: 97.2000% ( 11) 00:08:02.810 21677.292 - 21778.117: 97.3500% ( 12) 00:08:02.810 21778.117 - 21878.942: 97.4375% ( 7) 00:08:02.810 21878.942 - 21979.766: 97.4875% ( 4) 00:08:02.810 21979.766 - 22080.591: 
97.5375% ( 4) 00:08:02.810 22080.591 - 22181.415: 97.5625% ( 2) 00:08:02.810 22181.415 - 22282.240: 97.6375% ( 6) 00:08:02.810 22282.240 - 22383.065: 97.7375% ( 8) 00:08:02.810 22383.065 - 22483.889: 97.8000% ( 5) 00:08:02.810 22483.889 - 22584.714: 97.8625% ( 5) 00:08:02.810 22584.714 - 22685.538: 97.9125% ( 4) 00:08:02.810 22685.538 - 22786.363: 97.9625% ( 4) 00:08:02.810 22786.363 - 22887.188: 98.0625% ( 8) 00:08:02.810 22887.188 - 22988.012: 98.1375% ( 6) 00:08:02.810 22988.012 - 23088.837: 98.2750% ( 11) 00:08:02.810 23088.837 - 23189.662: 98.3500% ( 6) 00:08:02.810 23189.662 - 23290.486: 98.4500% ( 8) 00:08:02.810 23290.486 - 23391.311: 98.5500% ( 8) 00:08:02.810 23391.311 - 23492.135: 98.6250% ( 6) 00:08:02.810 23492.135 - 23592.960: 98.6875% ( 5) 00:08:02.810 23592.960 - 23693.785: 98.7375% ( 4) 00:08:02.810 23693.785 - 23794.609: 98.7750% ( 3) 00:08:02.810 23794.609 - 23895.434: 98.8250% ( 4) 00:08:02.810 23895.434 - 23996.258: 98.8875% ( 5) 00:08:02.810 23996.258 - 24097.083: 98.9375% ( 4) 00:08:02.810 24097.083 - 24197.908: 98.9875% ( 4) 00:08:02.810 24197.908 - 24298.732: 99.0250% ( 3) 00:08:02.810 24298.732 - 24399.557: 99.0625% ( 3) 00:08:02.810 24399.557 - 24500.382: 99.1125% ( 4) 00:08:02.810 24500.382 - 24601.206: 99.1625% ( 4) 00:08:02.810 24601.206 - 24702.031: 99.2000% ( 3) 00:08:02.810 29037.489 - 29239.138: 99.2125% ( 1) 00:08:02.810 29642.437 - 29844.086: 99.3000% ( 7) 00:08:02.810 29844.086 - 30045.735: 99.4125% ( 9) 00:08:02.810 30045.735 - 30247.385: 99.5500% ( 11) 00:08:02.810 30247.385 - 30449.034: 99.6625% ( 9) 00:08:02.810 30449.034 - 30650.683: 99.8000% ( 11) 00:08:02.811 30650.683 - 30852.332: 99.9375% ( 11) 00:08:02.811 30852.332 - 31053.982: 100.0000% ( 5) 00:08:02.811 00:08:02.811 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:02.811 ============================================================================== 00:08:02.811 Range in us Cumulative IO count 00:08:02.811 3680.098 - 3705.305: 0.0125% ( 1) 00:08:02.811 3705.305 - 3730.511: 0.0375% ( 2) 00:08:02.811 3755.717 - 3780.923: 0.0500% ( 1) 00:08:02.811 3780.923 - 3806.129: 0.0750% ( 2) 00:08:02.811 3806.129 - 3831.335: 0.1000% ( 2) 00:08:02.811 3831.335 - 3856.542: 0.1250% ( 2) 00:08:02.811 3856.542 - 3881.748: 0.1375% ( 1) 00:08:02.811 3881.748 - 3906.954: 0.1625% ( 2) 00:08:02.811 3906.954 - 3932.160: 0.1875% ( 2) 00:08:02.811 3932.160 - 3957.366: 0.2125% ( 2) 00:08:02.811 3957.366 - 3982.572: 0.2375% ( 2) 00:08:02.811 3982.572 - 4007.778: 0.2625% ( 2) 00:08:02.811 4007.778 - 4032.985: 0.2875% ( 2) 00:08:02.811 4032.985 - 4058.191: 0.3000% ( 1) 00:08:02.811 4058.191 - 4083.397: 0.3250% ( 2) 00:08:02.811 4083.397 - 4108.603: 0.3500% ( 2) 00:08:02.811 4108.603 - 4133.809: 0.3750% ( 2) 00:08:02.811 4133.809 - 4159.015: 0.4000% ( 2) 00:08:02.811 4159.015 - 4184.222: 0.4250% ( 2) 00:08:02.811 4184.222 - 4209.428: 0.4375% ( 1) 00:08:02.811 4209.428 - 4234.634: 0.4625% ( 2) 00:08:02.811 4234.634 - 4259.840: 0.4875% ( 2) 00:08:02.811 4259.840 - 4285.046: 0.5125% ( 2) 00:08:02.811 4285.046 - 4310.252: 0.5375% ( 2) 00:08:02.811 4310.252 - 4335.458: 0.5625% ( 2) 00:08:02.811 4335.458 - 4360.665: 0.5875% ( 2) 00:08:02.811 4360.665 - 4385.871: 0.6125% ( 2) 00:08:02.811 4385.871 - 4411.077: 0.6250% ( 1) 00:08:02.811 4411.077 - 4436.283: 0.6500% ( 2) 00:08:02.811 4436.283 - 4461.489: 0.6625% ( 1) 00:08:02.811 4461.489 - 4486.695: 0.6875% ( 2) 00:08:02.811 4486.695 - 4511.902: 0.7125% ( 2) 00:08:02.811 4511.902 - 4537.108: 0.7375% ( 2) 00:08:02.811 4537.108 - 4562.314: 0.7625% ( 2) 00:08:02.811 
4562.314 - 4587.520: 0.7875% ( 2) 00:08:02.811 4587.520 - 4612.726: 0.8000% ( 1) 00:08:02.811 8318.031 - 8368.443: 0.8375% ( 3) 00:08:02.811 8368.443 - 8418.855: 0.8625% ( 2) 00:08:02.811 8469.268 - 8519.680: 0.9125% ( 4) 00:08:02.811 8519.680 - 8570.092: 0.9625% ( 4) 00:08:02.811 8570.092 - 8620.505: 1.0125% ( 4) 00:08:02.811 8620.505 - 8670.917: 1.0500% ( 3) 00:08:02.811 8670.917 - 8721.329: 1.1000% ( 4) 00:08:02.811 8721.329 - 8771.742: 1.1500% ( 4) 00:08:02.811 8771.742 - 8822.154: 1.1875% ( 3) 00:08:02.811 8822.154 - 8872.566: 1.2375% ( 4) 00:08:02.811 8872.566 - 8922.978: 1.2875% ( 4) 00:08:02.811 8922.978 - 8973.391: 1.3375% ( 4) 00:08:02.811 8973.391 - 9023.803: 1.3750% ( 3) 00:08:02.811 9023.803 - 9074.215: 1.4250% ( 4) 00:08:02.811 9074.215 - 9124.628: 1.4750% ( 4) 00:08:02.811 9124.628 - 9175.040: 1.5125% ( 3) 00:08:02.811 9175.040 - 9225.452: 1.5500% ( 3) 00:08:02.811 9225.452 - 9275.865: 1.5875% ( 3) 00:08:02.811 9275.865 - 9326.277: 1.6000% ( 1) 00:08:02.811 10788.234 - 10838.646: 1.6250% ( 2) 00:08:02.811 10838.646 - 10889.058: 1.6500% ( 2) 00:08:02.811 10889.058 - 10939.471: 1.6625% ( 1) 00:08:02.811 10939.471 - 10989.883: 1.7250% ( 5) 00:08:02.811 10989.883 - 11040.295: 1.7875% ( 5) 00:08:02.811 11040.295 - 11090.708: 1.8750% ( 7) 00:08:02.811 11090.708 - 11141.120: 1.9750% ( 8) 00:08:02.811 11141.120 - 11191.532: 2.1000% ( 10) 00:08:02.811 11191.532 - 11241.945: 2.2000% ( 8) 00:08:02.811 11241.945 - 11292.357: 2.2750% ( 6) 00:08:02.811 11292.357 - 11342.769: 2.3625% ( 7) 00:08:02.811 11342.769 - 11393.182: 2.4375% ( 6) 00:08:02.811 11393.182 - 11443.594: 2.5500% ( 9) 00:08:02.811 11443.594 - 11494.006: 2.6750% ( 10) 00:08:02.811 11494.006 - 11544.418: 2.8250% ( 12) 00:08:02.811 11544.418 - 11594.831: 2.9750% ( 12) 00:08:02.811 11594.831 - 11645.243: 3.1375% ( 13) 00:08:02.811 11645.243 - 11695.655: 3.3250% ( 15) 00:08:02.811 11695.655 - 11746.068: 3.4750% ( 12) 00:08:02.811 11746.068 - 11796.480: 3.6875% ( 17) 00:08:02.811 11796.480 - 11846.892: 3.9125% ( 18) 00:08:02.811 11846.892 - 11897.305: 4.1750% ( 21) 00:08:02.811 11897.305 - 11947.717: 4.4125% ( 19) 00:08:02.811 11947.717 - 11998.129: 4.6625% ( 20) 00:08:02.811 11998.129 - 12048.542: 4.9500% ( 23) 00:08:02.811 12048.542 - 12098.954: 5.2500% ( 24) 00:08:02.811 12098.954 - 12149.366: 5.4875% ( 19) 00:08:02.811 12149.366 - 12199.778: 5.7875% ( 24) 00:08:02.811 12199.778 - 12250.191: 6.0375% ( 20) 00:08:02.811 12250.191 - 12300.603: 6.2625% ( 18) 00:08:02.811 12300.603 - 12351.015: 6.4250% ( 13) 00:08:02.811 12351.015 - 12401.428: 6.6625% ( 19) 00:08:02.811 12401.428 - 12451.840: 6.9000% ( 19) 00:08:02.811 12451.840 - 12502.252: 7.1125% ( 17) 00:08:02.811 12502.252 - 12552.665: 7.3125% ( 16) 00:08:02.811 12552.665 - 12603.077: 7.5500% ( 19) 00:08:02.811 12603.077 - 12653.489: 7.8000% ( 20) 00:08:02.811 12653.489 - 12703.902: 8.0375% ( 19) 00:08:02.811 12703.902 - 12754.314: 8.2875% ( 20) 00:08:02.811 12754.314 - 12804.726: 8.6625% ( 30) 00:08:02.811 12804.726 - 12855.138: 9.0000% ( 27) 00:08:02.811 12855.138 - 12905.551: 9.2375% ( 19) 00:08:02.811 12905.551 - 13006.375: 9.7000% ( 37) 00:08:02.811 13006.375 - 13107.200: 10.0875% ( 31) 00:08:02.811 13107.200 - 13208.025: 10.5500% ( 37) 00:08:02.811 13208.025 - 13308.849: 11.2625% ( 57) 00:08:02.811 13308.849 - 13409.674: 11.9375% ( 54) 00:08:02.811 13409.674 - 13510.498: 12.5875% ( 52) 00:08:02.811 13510.498 - 13611.323: 13.1875% ( 48) 00:08:02.811 13611.323 - 13712.148: 14.0000% ( 65) 00:08:02.811 13712.148 - 13812.972: 15.0250% ( 82) 00:08:02.811 13812.972 - 
13913.797: 16.1375% ( 89) 00:08:02.811 13913.797 - 14014.622: 17.4500% ( 105) 00:08:02.811 14014.622 - 14115.446: 18.9625% ( 121) 00:08:02.811 14115.446 - 14216.271: 20.5500% ( 127) 00:08:02.811 14216.271 - 14317.095: 22.1250% ( 126) 00:08:02.811 14317.095 - 14417.920: 23.6375% ( 121) 00:08:02.811 14417.920 - 14518.745: 25.4000% ( 141) 00:08:02.811 14518.745 - 14619.569: 27.1625% ( 141) 00:08:02.811 14619.569 - 14720.394: 28.8500% ( 135) 00:08:02.811 14720.394 - 14821.218: 30.6750% ( 146) 00:08:02.811 14821.218 - 14922.043: 32.3250% ( 132) 00:08:02.811 14922.043 - 15022.868: 34.1250% ( 144) 00:08:02.811 15022.868 - 15123.692: 35.8875% ( 141) 00:08:02.811 15123.692 - 15224.517: 38.0875% ( 176) 00:08:02.811 15224.517 - 15325.342: 40.2125% ( 170) 00:08:02.811 15325.342 - 15426.166: 42.2750% ( 165) 00:08:02.811 15426.166 - 15526.991: 44.4000% ( 170) 00:08:02.811 15526.991 - 15627.815: 46.3750% ( 158) 00:08:02.811 15627.815 - 15728.640: 48.3625% ( 159) 00:08:02.811 15728.640 - 15829.465: 50.2125% ( 148) 00:08:02.811 15829.465 - 15930.289: 51.9250% ( 137) 00:08:02.811 15930.289 - 16031.114: 53.4500% ( 122) 00:08:02.811 16031.114 - 16131.938: 54.8875% ( 115) 00:08:02.811 16131.938 - 16232.763: 56.3375% ( 116) 00:08:02.811 16232.763 - 16333.588: 57.6875% ( 108) 00:08:02.811 16333.588 - 16434.412: 58.9375% ( 100) 00:08:02.811 16434.412 - 16535.237: 60.1250% ( 95) 00:08:02.811 16535.237 - 16636.062: 61.5250% ( 112) 00:08:02.811 16636.062 - 16736.886: 63.0500% ( 122) 00:08:02.811 16736.886 - 16837.711: 64.7125% ( 133) 00:08:02.811 16837.711 - 16938.535: 66.2500% ( 123) 00:08:02.811 16938.535 - 17039.360: 67.9000% ( 132) 00:08:02.811 17039.360 - 17140.185: 69.8750% ( 158) 00:08:02.811 17140.185 - 17241.009: 71.6875% ( 145) 00:08:02.811 17241.009 - 17341.834: 73.5250% ( 147) 00:08:02.811 17341.834 - 17442.658: 75.3125% ( 143) 00:08:02.811 17442.658 - 17543.483: 77.1000% ( 143) 00:08:02.811 17543.483 - 17644.308: 78.6125% ( 121) 00:08:02.811 17644.308 - 17745.132: 80.1250% ( 121) 00:08:02.811 17745.132 - 17845.957: 81.7500% ( 130) 00:08:02.811 17845.957 - 17946.782: 82.9125% ( 93) 00:08:02.811 17946.782 - 18047.606: 83.9125% ( 80) 00:08:02.811 18047.606 - 18148.431: 84.8500% ( 75) 00:08:02.811 18148.431 - 18249.255: 85.8250% ( 78) 00:08:02.811 18249.255 - 18350.080: 86.7250% ( 72) 00:08:02.811 18350.080 - 18450.905: 87.7750% ( 84) 00:08:02.811 18450.905 - 18551.729: 88.6125% ( 67) 00:08:02.811 18551.729 - 18652.554: 89.2125% ( 48) 00:08:02.811 18652.554 - 18753.378: 89.5750% ( 29) 00:08:02.811 18753.378 - 18854.203: 89.9625% ( 31) 00:08:02.811 18854.203 - 18955.028: 90.3250% ( 29) 00:08:02.811 18955.028 - 19055.852: 90.7625% ( 35) 00:08:02.811 19055.852 - 19156.677: 91.1500% ( 31) 00:08:02.811 19156.677 - 19257.502: 91.6250% ( 38) 00:08:02.811 19257.502 - 19358.326: 91.9625% ( 27) 00:08:02.811 19358.326 - 19459.151: 92.3250% ( 29) 00:08:02.811 19459.151 - 19559.975: 92.7125% ( 31) 00:08:02.811 19559.975 - 19660.800: 93.0750% ( 29) 00:08:02.811 19660.800 - 19761.625: 93.4375% ( 29) 00:08:02.811 19761.625 - 19862.449: 93.7625% ( 26) 00:08:02.811 19862.449 - 19963.274: 94.1125% ( 28) 00:08:02.811 19963.274 - 20064.098: 94.4125% ( 24) 00:08:02.811 20064.098 - 20164.923: 94.7625% ( 28) 00:08:02.811 20164.923 - 20265.748: 95.0125% ( 20) 00:08:02.811 20265.748 - 20366.572: 95.2375% ( 18) 00:08:02.811 20366.572 - 20467.397: 95.4750% ( 19) 00:08:02.811 20467.397 - 20568.222: 95.6625% ( 15) 00:08:02.811 20568.222 - 20669.046: 95.8625% ( 16) 00:08:02.811 20669.046 - 20769.871: 96.0500% ( 15) 00:08:02.811 20769.871 
- 20870.695: 96.2000% ( 12) 00:08:02.811 20870.695 - 20971.520: 96.3000% ( 8) 00:08:02.811 20971.520 - 21072.345: 96.4000% ( 8) 00:08:02.811 21072.345 - 21173.169: 96.5125% ( 9) 00:08:02.811 21173.169 - 21273.994: 96.5625% ( 4) 00:08:02.812 21273.994 - 21374.818: 96.5875% ( 2) 00:08:02.812 21374.818 - 21475.643: 96.6125% ( 2) 00:08:02.812 21475.643 - 21576.468: 96.6500% ( 3) 00:08:02.812 21576.468 - 21677.292: 96.6875% ( 3) 00:08:02.812 21677.292 - 21778.117: 96.7500% ( 5) 00:08:02.812 21778.117 - 21878.942: 96.9000% ( 12) 00:08:02.812 21878.942 - 21979.766: 97.0625% ( 13) 00:08:02.812 21979.766 - 22080.591: 97.2000% ( 11) 00:08:02.812 22080.591 - 22181.415: 97.3250% ( 10) 00:08:02.812 22181.415 - 22282.240: 97.4500% ( 10) 00:08:02.812 22282.240 - 22383.065: 97.5750% ( 10) 00:08:02.812 22383.065 - 22483.889: 97.7125% ( 11) 00:08:02.812 22483.889 - 22584.714: 97.8500% ( 11) 00:08:02.812 22584.714 - 22685.538: 97.9875% ( 11) 00:08:02.812 22685.538 - 22786.363: 98.1250% ( 11) 00:08:02.812 22786.363 - 22887.188: 98.2750% ( 12) 00:08:02.812 22887.188 - 22988.012: 98.3375% ( 5) 00:08:02.812 22988.012 - 23088.837: 98.4000% ( 5) 00:08:02.812 23492.135 - 23592.960: 98.4125% ( 1) 00:08:02.812 23592.960 - 23693.785: 98.4625% ( 4) 00:08:02.812 23693.785 - 23794.609: 98.5375% ( 6) 00:08:02.812 23794.609 - 23895.434: 98.5875% ( 4) 00:08:02.812 23895.434 - 23996.258: 98.6375% ( 4) 00:08:02.812 23996.258 - 24097.083: 98.7000% ( 5) 00:08:02.812 24097.083 - 24197.908: 98.7500% ( 4) 00:08:02.812 24197.908 - 24298.732: 98.8000% ( 4) 00:08:02.812 24298.732 - 24399.557: 98.8625% ( 5) 00:08:02.812 24399.557 - 24500.382: 98.9125% ( 4) 00:08:02.812 24500.382 - 24601.206: 98.9750% ( 5) 00:08:02.812 24601.206 - 24702.031: 99.0250% ( 4) 00:08:02.812 24702.031 - 24802.855: 99.0750% ( 4) 00:08:02.812 24802.855 - 24903.680: 99.1250% ( 4) 00:08:02.812 24903.680 - 25004.505: 99.1750% ( 4) 00:08:02.812 25004.505 - 25105.329: 99.2000% ( 2) 00:08:02.812 29037.489 - 29239.138: 99.2250% ( 2) 00:08:02.812 29239.138 - 29440.788: 99.3250% ( 8) 00:08:02.812 29440.788 - 29642.437: 99.4500% ( 10) 00:08:02.812 29642.437 - 29844.086: 99.5750% ( 10) 00:08:02.812 29844.086 - 30045.735: 99.7000% ( 10) 00:08:02.812 30045.735 - 30247.385: 99.8250% ( 10) 00:08:02.812 30247.385 - 30449.034: 99.9500% ( 10) 00:08:02.812 30449.034 - 30650.683: 100.0000% ( 4) 00:08:02.812 00:08:02.812 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:02.812 ============================================================================== 00:08:02.812 Range in us Cumulative IO count 00:08:02.812 3352.418 - 3377.625: 0.0375% ( 3) 00:08:02.812 3377.625 - 3402.831: 0.0625% ( 2) 00:08:02.812 3402.831 - 3428.037: 0.0875% ( 2) 00:08:02.812 3428.037 - 3453.243: 0.1125% ( 2) 00:08:02.812 3453.243 - 3478.449: 0.1375% ( 2) 00:08:02.812 3478.449 - 3503.655: 0.1625% ( 2) 00:08:02.812 3503.655 - 3528.862: 0.2000% ( 3) 00:08:02.812 3528.862 - 3554.068: 0.2125% ( 1) 00:08:02.812 3554.068 - 3579.274: 0.2250% ( 1) 00:08:02.812 3579.274 - 3604.480: 0.2375% ( 1) 00:08:02.812 3604.480 - 3629.686: 0.2750% ( 3) 00:08:02.812 3629.686 - 3654.892: 0.2875% ( 1) 00:08:02.812 3654.892 - 3680.098: 0.3125% ( 2) 00:08:02.812 3680.098 - 3705.305: 0.3250% ( 1) 00:08:02.812 3705.305 - 3730.511: 0.3500% ( 2) 00:08:02.812 3730.511 - 3755.717: 0.3750% ( 2) 00:08:02.812 3755.717 - 3780.923: 0.4000% ( 2) 00:08:02.812 3780.923 - 3806.129: 0.4250% ( 2) 00:08:02.812 3806.129 - 3831.335: 0.4375% ( 1) 00:08:02.812 3831.335 - 3856.542: 0.4625% ( 2) 00:08:02.812 3856.542 - 3881.748: 0.4875% ( 
2) 00:08:02.812 3881.748 - 3906.954: 0.5125% ( 2) 00:08:02.812 3906.954 - 3932.160: 0.5375% ( 2) 00:08:02.812 3932.160 - 3957.366: 0.5500% ( 1) 00:08:02.812 3957.366 - 3982.572: 0.5750% ( 2) 00:08:02.812 3982.572 - 4007.778: 0.6000% ( 2) 00:08:02.812 4007.778 - 4032.985: 0.6250% ( 2) 00:08:02.812 4032.985 - 4058.191: 0.6500% ( 2) 00:08:02.812 4058.191 - 4083.397: 0.6750% ( 2) 00:08:02.812 4083.397 - 4108.603: 0.7000% ( 2) 00:08:02.812 4108.603 - 4133.809: 0.7125% ( 1) 00:08:02.812 4133.809 - 4159.015: 0.7375% ( 2) 00:08:02.812 4159.015 - 4184.222: 0.7625% ( 2) 00:08:02.812 4184.222 - 4209.428: 0.7875% ( 2) 00:08:02.812 4209.428 - 4234.634: 0.8000% ( 1) 00:08:02.812 7561.846 - 7612.258: 0.8125% ( 1) 00:08:02.812 7612.258 - 7662.671: 0.8250% ( 1) 00:08:02.812 7662.671 - 7713.083: 0.8750% ( 4) 00:08:02.812 7713.083 - 7763.495: 0.9125% ( 3) 00:08:02.812 7763.495 - 7813.908: 0.9375% ( 2) 00:08:02.812 7864.320 - 7914.732: 0.9500% ( 1) 00:08:02.812 7914.732 - 7965.145: 1.0000% ( 4) 00:08:02.812 7965.145 - 8015.557: 1.0500% ( 4) 00:08:02.812 8015.557 - 8065.969: 1.1000% ( 4) 00:08:02.812 8065.969 - 8116.382: 1.1375% ( 3) 00:08:02.812 8116.382 - 8166.794: 1.1875% ( 4) 00:08:02.812 8166.794 - 8217.206: 1.2375% ( 4) 00:08:02.812 8217.206 - 8267.618: 1.2875% ( 4) 00:08:02.812 8267.618 - 8318.031: 1.3375% ( 4) 00:08:02.812 8318.031 - 8368.443: 1.3750% ( 3) 00:08:02.812 8368.443 - 8418.855: 1.4125% ( 3) 00:08:02.812 8418.855 - 8469.268: 1.4625% ( 4) 00:08:02.812 8469.268 - 8519.680: 1.5125% ( 4) 00:08:02.812 8519.680 - 8570.092: 1.5625% ( 4) 00:08:02.812 8570.092 - 8620.505: 1.6000% ( 3) 00:08:02.812 10838.646 - 10889.058: 1.6250% ( 2) 00:08:02.812 10889.058 - 10939.471: 1.6625% ( 3) 00:08:02.812 10939.471 - 10989.883: 1.6875% ( 2) 00:08:02.812 10989.883 - 11040.295: 1.7375% ( 4) 00:08:02.812 11040.295 - 11090.708: 1.8125% ( 6) 00:08:02.812 11090.708 - 11141.120: 1.8750% ( 5) 00:08:02.812 11141.120 - 11191.532: 1.9750% ( 8) 00:08:02.812 11191.532 - 11241.945: 2.0750% ( 8) 00:08:02.812 11241.945 - 11292.357: 2.2125% ( 11) 00:08:02.812 11292.357 - 11342.769: 2.3500% ( 11) 00:08:02.812 11342.769 - 11393.182: 2.4625% ( 9) 00:08:02.812 11393.182 - 11443.594: 2.6000% ( 11) 00:08:02.812 11443.594 - 11494.006: 2.7250% ( 10) 00:08:02.812 11494.006 - 11544.418: 2.8750% ( 12) 00:08:02.812 11544.418 - 11594.831: 3.0125% ( 11) 00:08:02.812 11594.831 - 11645.243: 3.1500% ( 11) 00:08:02.812 11645.243 - 11695.655: 3.3250% ( 14) 00:08:02.812 11695.655 - 11746.068: 3.4750% ( 12) 00:08:02.812 11746.068 - 11796.480: 3.6750% ( 16) 00:08:02.812 11796.480 - 11846.892: 3.9125% ( 19) 00:08:02.812 11846.892 - 11897.305: 4.1250% ( 17) 00:08:02.812 11897.305 - 11947.717: 4.3625% ( 19) 00:08:02.812 11947.717 - 11998.129: 4.5625% ( 16) 00:08:02.812 11998.129 - 12048.542: 4.7875% ( 18) 00:08:02.812 12048.542 - 12098.954: 4.9750% ( 15) 00:08:02.812 12098.954 - 12149.366: 5.1625% ( 15) 00:08:02.812 12149.366 - 12199.778: 5.3625% ( 16) 00:08:02.812 12199.778 - 12250.191: 5.5000% ( 11) 00:08:02.812 12250.191 - 12300.603: 5.6875% ( 15) 00:08:02.812 12300.603 - 12351.015: 5.8625% ( 14) 00:08:02.812 12351.015 - 12401.428: 6.0375% ( 14) 00:08:02.812 12401.428 - 12451.840: 6.1500% ( 9) 00:08:02.812 12451.840 - 12502.252: 6.3375% ( 15) 00:08:02.812 12502.252 - 12552.665: 6.5500% ( 17) 00:08:02.812 12552.665 - 12603.077: 6.8375% ( 23) 00:08:02.812 12603.077 - 12653.489: 7.1000% ( 21) 00:08:02.812 12653.489 - 12703.902: 7.4250% ( 26) 00:08:02.812 12703.902 - 12754.314: 7.7125% ( 23) 00:08:02.812 12754.314 - 12804.726: 8.0000% ( 23) 00:08:02.812 
12804.726 - 12855.138: 8.2875% ( 23) 00:08:02.812 12855.138 - 12905.551: 8.5875% ( 24) 00:08:02.812 12905.551 - 13006.375: 9.2250% ( 51) 00:08:02.812 13006.375 - 13107.200: 9.9875% ( 61) 00:08:02.812 13107.200 - 13208.025: 10.9875% ( 80) 00:08:02.812 13208.025 - 13308.849: 11.7250% ( 59) 00:08:02.812 13308.849 - 13409.674: 12.5500% ( 66) 00:08:02.812 13409.674 - 13510.498: 13.4250% ( 70) 00:08:02.812 13510.498 - 13611.323: 14.3125% ( 71) 00:08:02.812 13611.323 - 13712.148: 15.5000% ( 95) 00:08:02.812 13712.148 - 13812.972: 16.5375% ( 83) 00:08:02.812 13812.972 - 13913.797: 17.7000% ( 93) 00:08:02.812 13913.797 - 14014.622: 18.8750% ( 94) 00:08:02.812 14014.622 - 14115.446: 20.0625% ( 95) 00:08:02.812 14115.446 - 14216.271: 21.4250% ( 109) 00:08:02.812 14216.271 - 14317.095: 22.7750% ( 108) 00:08:02.812 14317.095 - 14417.920: 24.5125% ( 139) 00:08:02.812 14417.920 - 14518.745: 26.3000% ( 143) 00:08:02.812 14518.745 - 14619.569: 27.8250% ( 122) 00:08:02.812 14619.569 - 14720.394: 29.3375% ( 121) 00:08:02.812 14720.394 - 14821.218: 30.9000% ( 125) 00:08:02.812 14821.218 - 14922.043: 32.6875% ( 143) 00:08:02.812 14922.043 - 15022.868: 34.6000% ( 153) 00:08:02.812 15022.868 - 15123.692: 36.7250% ( 170) 00:08:02.812 15123.692 - 15224.517: 38.7500% ( 162) 00:08:02.812 15224.517 - 15325.342: 40.9125% ( 173) 00:08:02.812 15325.342 - 15426.166: 42.8625% ( 156) 00:08:02.812 15426.166 - 15526.991: 44.7625% ( 152) 00:08:02.812 15526.991 - 15627.815: 46.7625% ( 160) 00:08:02.812 15627.815 - 15728.640: 48.5125% ( 140) 00:08:02.812 15728.640 - 15829.465: 50.1375% ( 130) 00:08:02.812 15829.465 - 15930.289: 51.6500% ( 121) 00:08:02.812 15930.289 - 16031.114: 53.1500% ( 120) 00:08:02.812 16031.114 - 16131.938: 54.7125% ( 125) 00:08:02.812 16131.938 - 16232.763: 56.1750% ( 117) 00:08:02.812 16232.763 - 16333.588: 57.6625% ( 119) 00:08:02.812 16333.588 - 16434.412: 59.1000% ( 115) 00:08:02.812 16434.412 - 16535.237: 60.5375% ( 115) 00:08:02.812 16535.237 - 16636.062: 62.2125% ( 134) 00:08:02.812 16636.062 - 16736.886: 63.7625% ( 124) 00:08:02.812 16736.886 - 16837.711: 65.1500% ( 111) 00:08:02.812 16837.711 - 16938.535: 66.5000% ( 108) 00:08:02.812 16938.535 - 17039.360: 68.1625% ( 133) 00:08:02.812 17039.360 - 17140.185: 69.8875% ( 138) 00:08:02.812 17140.185 - 17241.009: 71.4000% ( 121) 00:08:02.813 17241.009 - 17341.834: 73.0875% ( 135) 00:08:02.813 17341.834 - 17442.658: 74.7375% ( 132) 00:08:02.813 17442.658 - 17543.483: 76.1875% ( 116) 00:08:02.813 17543.483 - 17644.308: 77.6250% ( 115) 00:08:02.813 17644.308 - 17745.132: 79.1250% ( 120) 00:08:02.813 17745.132 - 17845.957: 80.6125% ( 119) 00:08:02.813 17845.957 - 17946.782: 82.0500% ( 115) 00:08:02.813 17946.782 - 18047.606: 83.3500% ( 104) 00:08:02.813 18047.606 - 18148.431: 84.5375% ( 95) 00:08:02.813 18148.431 - 18249.255: 85.6625% ( 90) 00:08:02.813 18249.255 - 18350.080: 86.5125% ( 68) 00:08:02.813 18350.080 - 18450.905: 87.3875% ( 70) 00:08:02.813 18450.905 - 18551.729: 88.2250% ( 67) 00:08:02.813 18551.729 - 18652.554: 89.1000% ( 70) 00:08:02.813 18652.554 - 18753.378: 89.8750% ( 62) 00:08:02.813 18753.378 - 18854.203: 90.4875% ( 49) 00:08:02.813 18854.203 - 18955.028: 91.0500% ( 45) 00:08:02.813 18955.028 - 19055.852: 91.5125% ( 37) 00:08:02.813 19055.852 - 19156.677: 91.8500% ( 27) 00:08:02.813 19156.677 - 19257.502: 92.1500% ( 24) 00:08:02.813 19257.502 - 19358.326: 92.4375% ( 23) 00:08:02.813 19358.326 - 19459.151: 92.6500% ( 17) 00:08:02.813 19459.151 - 19559.975: 93.0500% ( 32) 00:08:02.813 19559.975 - 19660.800: 93.3375% ( 23) 00:08:02.813 
19660.800 - 19761.625: 93.6125% ( 22) 00:08:02.813 19761.625 - 19862.449: 93.8875% ( 22) 00:08:02.813 19862.449 - 19963.274: 94.1250% ( 19) 00:08:02.813 19963.274 - 20064.098: 94.3125% ( 15) 00:08:02.813 20064.098 - 20164.923: 94.5625% ( 20) 00:08:02.813 20164.923 - 20265.748: 94.8250% ( 21) 00:08:02.813 20265.748 - 20366.572: 95.1250% ( 24) 00:08:02.813 20366.572 - 20467.397: 95.4250% ( 24) 00:08:02.813 20467.397 - 20568.222: 95.7625% ( 27) 00:08:02.813 20568.222 - 20669.046: 96.0250% ( 21) 00:08:02.813 20669.046 - 20769.871: 96.2000% ( 14) 00:08:02.813 20769.871 - 20870.695: 96.3000% ( 8) 00:08:02.813 20870.695 - 20971.520: 96.4000% ( 8) 00:08:02.813 20971.520 - 21072.345: 96.5125% ( 9) 00:08:02.813 21072.345 - 21173.169: 96.6250% ( 9) 00:08:02.813 21173.169 - 21273.994: 96.7000% ( 6) 00:08:02.813 21273.994 - 21374.818: 96.7875% ( 7) 00:08:02.813 21374.818 - 21475.643: 96.8625% ( 6) 00:08:02.813 21475.643 - 21576.468: 96.9250% ( 5) 00:08:02.813 21576.468 - 21677.292: 96.9875% ( 5) 00:08:02.813 21677.292 - 21778.117: 97.0375% ( 4) 00:08:02.813 21778.117 - 21878.942: 97.1000% ( 5) 00:08:02.813 21878.942 - 21979.766: 97.1750% ( 6) 00:08:02.813 21979.766 - 22080.591: 97.2375% ( 5) 00:08:02.813 22080.591 - 22181.415: 97.3000% ( 5) 00:08:02.813 22181.415 - 22282.240: 97.3500% ( 4) 00:08:02.813 22282.240 - 22383.065: 97.4625% ( 9) 00:08:02.813 22383.065 - 22483.889: 97.6250% ( 13) 00:08:02.813 22483.889 - 22584.714: 97.8000% ( 14) 00:08:02.813 22584.714 - 22685.538: 97.9125% ( 9) 00:08:02.813 22685.538 - 22786.363: 98.0125% ( 8) 00:08:02.813 22786.363 - 22887.188: 98.0625% ( 4) 00:08:02.813 22887.188 - 22988.012: 98.1375% ( 6) 00:08:02.813 22988.012 - 23088.837: 98.2000% ( 5) 00:08:02.813 23088.837 - 23189.662: 98.2500% ( 4) 00:08:02.813 23189.662 - 23290.486: 98.3250% ( 6) 00:08:02.813 23290.486 - 23391.311: 98.3875% ( 5) 00:08:02.813 23391.311 - 23492.135: 98.4000% ( 1) 00:08:02.813 24197.908 - 24298.732: 98.4500% ( 4) 00:08:02.813 24298.732 - 24399.557: 98.4875% ( 3) 00:08:02.813 24399.557 - 24500.382: 98.5375% ( 4) 00:08:02.813 24500.382 - 24601.206: 98.5875% ( 4) 00:08:02.813 24601.206 - 24702.031: 98.6250% ( 3) 00:08:02.813 24702.031 - 24802.855: 98.6625% ( 3) 00:08:02.813 24802.855 - 24903.680: 98.7125% ( 4) 00:08:02.813 24903.680 - 25004.505: 98.7500% ( 3) 00:08:02.813 25004.505 - 25105.329: 98.8000% ( 4) 00:08:02.813 25105.329 - 25206.154: 98.8375% ( 3) 00:08:02.813 25206.154 - 25306.978: 98.8750% ( 3) 00:08:02.813 25306.978 - 25407.803: 98.9250% ( 4) 00:08:02.813 25407.803 - 25508.628: 98.9625% ( 3) 00:08:02.813 25508.628 - 25609.452: 99.0125% ( 4) 00:08:02.813 25609.452 - 25710.277: 99.0500% ( 3) 00:08:02.813 25710.277 - 25811.102: 99.1000% ( 4) 00:08:02.813 25811.102 - 26012.751: 99.1750% ( 6) 00:08:02.813 26012.751 - 26214.400: 99.2000% ( 2) 00:08:02.813 28835.840 - 29037.489: 99.3000% ( 8) 00:08:02.813 29037.489 - 29239.138: 99.4375% ( 11) 00:08:02.813 29239.138 - 29440.788: 99.5625% ( 10) 00:08:02.813 29440.788 - 29642.437: 99.6875% ( 10) 00:08:02.813 29642.437 - 29844.086: 99.8000% ( 9) 00:08:02.813 29844.086 - 30045.735: 99.9250% ( 10) 00:08:02.813 30045.735 - 30247.385: 100.0000% ( 6) 00:08:02.813 00:08:02.813 04:13:48 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:04.199 Initializing NVMe Controllers 00:08:04.199 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:04.199 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:04.199 Attached to NVMe Controller at 
0000:00:11.0 [1b36:0010] 00:08:04.199 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:04.199 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:04.199 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:04.199 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:04.199 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:04.199 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:04.199 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:04.199 Initialization complete. Launching workers. 00:08:04.199 ======================================================== 00:08:04.199 Latency(us) 00:08:04.199 Device Information : IOPS MiB/s Average min max 00:08:04.199 PCIE (0000:00:13.0) NSID 1 from core 0: 9840.49 115.32 13017.77 8834.97 25934.14 00:08:04.199 PCIE (0000:00:10.0) NSID 1 from core 0: 9840.49 115.32 13008.89 7792.56 25825.18 00:08:04.199 PCIE (0000:00:11.0) NSID 1 from core 0: 9840.49 115.32 12997.72 7652.18 25074.21 00:08:04.199 PCIE (0000:00:12.0) NSID 1 from core 0: 9840.49 115.32 12987.37 6465.77 25951.26 00:08:04.199 PCIE (0000:00:12.0) NSID 2 from core 0: 9840.49 115.32 12976.91 5082.38 25481.52 00:08:04.199 PCIE (0000:00:12.0) NSID 3 from core 0: 9840.49 115.32 12966.41 4203.02 25518.40 00:08:04.199 ======================================================== 00:08:04.199 Total : 59042.95 691.91 12992.51 4203.02 25951.26 00:08:04.199 00:08:04.199 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:04.199 ================================================================================= 00:08:04.199 1.00000% : 10183.286us 00:08:04.199 10.00000% : 11141.120us 00:08:04.199 25.00000% : 11796.480us 00:08:04.199 50.00000% : 12653.489us 00:08:04.199 75.00000% : 13712.148us 00:08:04.199 90.00000% : 15627.815us 00:08:04.199 95.00000% : 16938.535us 00:08:04.199 98.00000% : 18652.554us 00:08:04.199 99.00000% : 19559.975us 00:08:04.199 99.50000% : 25206.154us 00:08:04.199 99.90000% : 25811.102us 00:08:04.199 99.99000% : 26012.751us 00:08:04.199 99.99900% : 26012.751us 00:08:04.199 99.99990% : 26012.751us 00:08:04.199 99.99999% : 26012.751us 00:08:04.199 00:08:04.199 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:04.199 ================================================================================= 00:08:04.199 1.00000% : 10032.049us 00:08:04.199 10.00000% : 11090.708us 00:08:04.199 25.00000% : 11796.480us 00:08:04.199 50.00000% : 12703.902us 00:08:04.199 75.00000% : 13712.148us 00:08:04.199 90.00000% : 15325.342us 00:08:04.199 95.00000% : 17140.185us 00:08:04.199 98.00000% : 18854.203us 00:08:04.199 99.00000% : 19963.274us 00:08:04.199 99.50000% : 24702.031us 00:08:04.199 99.90000% : 25609.452us 00:08:04.199 99.99000% : 26012.751us 00:08:04.199 99.99900% : 26012.751us 00:08:04.199 99.99990% : 26012.751us 00:08:04.199 99.99999% : 26012.751us 00:08:04.199 00:08:04.199 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:04.199 ================================================================================= 00:08:04.199 1.00000% : 10132.874us 00:08:04.199 10.00000% : 11090.708us 00:08:04.199 25.00000% : 11796.480us 00:08:04.199 50.00000% : 12653.489us 00:08:04.199 75.00000% : 13611.323us 00:08:04.199 90.00000% : 15526.991us 00:08:04.199 95.00000% : 17241.009us 00:08:04.199 98.00000% : 18753.378us 00:08:04.199 99.00000% : 19761.625us 00:08:04.199 99.50000% : 23996.258us 00:08:04.199 99.90000% : 24903.680us 00:08:04.199 99.99000% : 25105.329us 00:08:04.199 99.99900% : 25105.329us 00:08:04.199 
99.99990% : 25105.329us 00:08:04.199 99.99999% : 25105.329us 00:08:04.199 00:08:04.200 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:04.200 ================================================================================= 00:08:04.200 1.00000% : 10183.286us 00:08:04.200 10.00000% : 11241.945us 00:08:04.200 25.00000% : 11846.892us 00:08:04.200 50.00000% : 12603.077us 00:08:04.200 75.00000% : 13611.323us 00:08:04.200 90.00000% : 15526.991us 00:08:04.200 95.00000% : 16535.237us 00:08:04.200 98.00000% : 18652.554us 00:08:04.200 99.00000% : 19459.151us 00:08:04.200 99.50000% : 24903.680us 00:08:04.200 99.90000% : 25811.102us 00:08:04.200 99.99000% : 26012.751us 00:08:04.200 99.99900% : 26012.751us 00:08:04.200 99.99990% : 26012.751us 00:08:04.200 99.99999% : 26012.751us 00:08:04.200 00:08:04.200 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:04.200 ================================================================================= 00:08:04.200 1.00000% : 9981.637us 00:08:04.200 10.00000% : 11241.945us 00:08:04.200 25.00000% : 11897.305us 00:08:04.200 50.00000% : 12603.077us 00:08:04.200 75.00000% : 13611.323us 00:08:04.200 90.00000% : 15627.815us 00:08:04.200 95.00000% : 16938.535us 00:08:04.200 98.00000% : 18551.729us 00:08:04.200 99.00000% : 19358.326us 00:08:04.200 99.50000% : 24500.382us 00:08:04.200 99.90000% : 25407.803us 00:08:04.200 99.99000% : 25508.628us 00:08:04.200 99.99900% : 25508.628us 00:08:04.200 99.99990% : 25508.628us 00:08:04.200 99.99999% : 25508.628us 00:08:04.200 00:08:04.200 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:04.200 ================================================================================= 00:08:04.200 1.00000% : 9326.277us 00:08:04.200 10.00000% : 11141.120us 00:08:04.200 25.00000% : 11796.480us 00:08:04.200 50.00000% : 12603.077us 00:08:04.200 75.00000% : 13611.323us 00:08:04.200 90.00000% : 15526.991us 00:08:04.200 95.00000% : 16837.711us 00:08:04.200 98.00000% : 18652.554us 00:08:04.200 99.00000% : 19459.151us 00:08:04.200 99.50000% : 24802.855us 00:08:04.200 99.90000% : 25407.803us 00:08:04.200 99.99000% : 25609.452us 00:08:04.200 99.99900% : 25609.452us 00:08:04.200 99.99990% : 25609.452us 00:08:04.200 99.99999% : 25609.452us 00:08:04.200 00:08:04.200 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:04.200 ============================================================================== 00:08:04.200 Range in us Cumulative IO count 00:08:04.200 8822.154 - 8872.566: 0.0406% ( 4) 00:08:04.200 8872.566 - 8922.978: 0.1015% ( 6) 00:08:04.200 8922.978 - 8973.391: 0.1826% ( 8) 00:08:04.200 8973.391 - 9023.803: 0.3551% ( 17) 00:08:04.200 9023.803 - 9074.215: 0.4769% ( 12) 00:08:04.200 9074.215 - 9124.628: 0.5377% ( 6) 00:08:04.200 9124.628 - 9175.040: 0.5783% ( 4) 00:08:04.200 9175.040 - 9225.452: 0.6088% ( 3) 00:08:04.200 9225.452 - 9275.865: 0.6494% ( 4) 00:08:04.200 9830.400 - 9880.812: 0.6798% ( 3) 00:08:04.200 9880.812 - 9931.225: 0.7102% ( 3) 00:08:04.200 9931.225 - 9981.637: 0.7508% ( 4) 00:08:04.200 9981.637 - 10032.049: 0.8015% ( 5) 00:08:04.200 10032.049 - 10082.462: 0.8421% ( 4) 00:08:04.200 10082.462 - 10132.874: 0.9030% ( 6) 00:08:04.200 10132.874 - 10183.286: 1.0045% ( 10) 00:08:04.200 10183.286 - 10233.698: 1.1972% ( 19) 00:08:04.200 10233.698 - 10284.111: 1.4509% ( 25) 00:08:04.200 10284.111 - 10334.523: 1.7350% ( 28) 00:08:04.200 10334.523 - 10384.935: 2.0901% ( 35) 00:08:04.200 10384.935 - 10435.348: 2.5264% ( 43) 00:08:04.200 10435.348 - 10485.760: 
2.8815% ( 35) 00:08:04.200 10485.760 - 10536.172: 3.2366% ( 35) 00:08:04.200 10536.172 - 10586.585: 3.6627% ( 42) 00:08:04.200 10586.585 - 10636.997: 4.2918% ( 62) 00:08:04.200 10636.997 - 10687.409: 4.7788% ( 48) 00:08:04.200 10687.409 - 10737.822: 5.1644% ( 38) 00:08:04.200 10737.822 - 10788.234: 5.5804% ( 41) 00:08:04.200 10788.234 - 10838.646: 6.1282% ( 54) 00:08:04.200 10838.646 - 10889.058: 6.6964% ( 56) 00:08:04.200 10889.058 - 10939.471: 7.2950% ( 59) 00:08:04.200 10939.471 - 10989.883: 7.9140% ( 61) 00:08:04.200 10989.883 - 11040.295: 8.6851% ( 76) 00:08:04.200 11040.295 - 11090.708: 9.4765% ( 78) 00:08:04.200 11090.708 - 11141.120: 10.2476% ( 76) 00:08:04.200 11141.120 - 11191.532: 11.2114% ( 95) 00:08:04.200 11191.532 - 11241.945: 12.5406% ( 131) 00:08:04.200 11241.945 - 11292.357: 13.7480% ( 119) 00:08:04.200 11292.357 - 11342.769: 14.7220% ( 96) 00:08:04.200 11342.769 - 11393.182: 15.8076% ( 107) 00:08:04.200 11393.182 - 11443.594: 16.9034% ( 108) 00:08:04.200 11443.594 - 11494.006: 18.3442% ( 142) 00:08:04.200 11494.006 - 11544.418: 19.7139% ( 135) 00:08:04.200 11544.418 - 11594.831: 20.6169% ( 89) 00:08:04.200 11594.831 - 11645.243: 21.7127% ( 108) 00:08:04.200 11645.243 - 11695.655: 22.6055% ( 88) 00:08:04.200 11695.655 - 11746.068: 23.9144% ( 129) 00:08:04.200 11746.068 - 11796.480: 25.1522% ( 122) 00:08:04.200 11796.480 - 11846.892: 26.6741% ( 150) 00:08:04.200 11846.892 - 11897.305: 28.2873% ( 159) 00:08:04.200 11897.305 - 11947.717: 29.6875% ( 138) 00:08:04.200 11947.717 - 11998.129: 31.0978% ( 139) 00:08:04.200 11998.129 - 12048.542: 32.9748% ( 185) 00:08:04.200 12048.542 - 12098.954: 34.4359% ( 144) 00:08:04.200 12098.954 - 12149.366: 35.9984% ( 154) 00:08:04.200 12149.366 - 12199.778: 37.3681% ( 135) 00:08:04.200 12199.778 - 12250.191: 38.5045% ( 112) 00:08:04.200 12250.191 - 12300.603: 39.7930% ( 127) 00:08:04.200 12300.603 - 12351.015: 41.4062% ( 159) 00:08:04.200 12351.015 - 12401.428: 42.9485% ( 152) 00:08:04.200 12401.428 - 12451.840: 44.4907% ( 152) 00:08:04.200 12451.840 - 12502.252: 45.9821% ( 147) 00:08:04.200 12502.252 - 12552.665: 47.3011% ( 130) 00:08:04.200 12552.665 - 12603.077: 48.7013% ( 138) 00:08:04.200 12603.077 - 12653.489: 50.5986% ( 187) 00:08:04.200 12653.489 - 12703.902: 52.5670% ( 194) 00:08:04.200 12703.902 - 12754.314: 54.1599% ( 157) 00:08:04.200 12754.314 - 12804.726: 55.7021% ( 152) 00:08:04.200 12804.726 - 12855.138: 57.3965% ( 167) 00:08:04.200 12855.138 - 12905.551: 58.8474% ( 143) 00:08:04.200 12905.551 - 13006.375: 62.3377% ( 344) 00:08:04.200 13006.375 - 13107.200: 65.3612% ( 298) 00:08:04.200 13107.200 - 13208.025: 67.9688% ( 257) 00:08:04.200 13208.025 - 13308.849: 69.7849% ( 179) 00:08:04.200 13308.849 - 13409.674: 71.4692% ( 166) 00:08:04.200 13409.674 - 13510.498: 72.8084% ( 132) 00:08:04.200 13510.498 - 13611.323: 74.1883% ( 136) 00:08:04.200 13611.323 - 13712.148: 75.4261% ( 122) 00:08:04.200 13712.148 - 13812.972: 76.8669% ( 142) 00:08:04.200 13812.972 - 13913.797: 78.4801% ( 159) 00:08:04.200 13913.797 - 14014.622: 80.0426% ( 154) 00:08:04.200 14014.622 - 14115.446: 81.1080% ( 105) 00:08:04.200 14115.446 - 14216.271: 82.2646% ( 114) 00:08:04.200 14216.271 - 14317.095: 83.4010% ( 112) 00:08:04.200 14317.095 - 14417.920: 84.3243% ( 91) 00:08:04.200 14417.920 - 14518.745: 84.9736% ( 64) 00:08:04.200 14518.745 - 14619.569: 85.5418% ( 56) 00:08:04.200 14619.569 - 14720.394: 85.9882% ( 44) 00:08:04.200 14720.394 - 14821.218: 86.3839% ( 39) 00:08:04.200 14821.218 - 14922.043: 86.8202% ( 43) 00:08:04.200 14922.043 - 15022.868: 
87.2159% ( 39) 00:08:04.200 15022.868 - 15123.692: 87.7942% ( 57) 00:08:04.200 15123.692 - 15224.517: 88.2914% ( 49) 00:08:04.200 15224.517 - 15325.342: 88.7683% ( 47) 00:08:04.200 15325.342 - 15426.166: 89.1944% ( 42) 00:08:04.200 15426.166 - 15526.991: 89.5800% ( 38) 00:08:04.200 15526.991 - 15627.815: 90.2293% ( 64) 00:08:04.200 15627.815 - 15728.640: 90.8076% ( 57) 00:08:04.200 15728.640 - 15829.465: 91.2744% ( 46) 00:08:04.200 15829.465 - 15930.289: 91.7106% ( 43) 00:08:04.200 15930.289 - 16031.114: 92.3295% ( 61) 00:08:04.200 16031.114 - 16131.938: 92.7658% ( 43) 00:08:04.200 16131.938 - 16232.763: 93.1615% ( 39) 00:08:04.200 16232.763 - 16333.588: 93.4862% ( 32) 00:08:04.200 16333.588 - 16434.412: 93.6891% ( 20) 00:08:04.200 16434.412 - 16535.237: 93.8515% ( 16) 00:08:04.200 16535.237 - 16636.062: 94.2877% ( 43) 00:08:04.200 16636.062 - 16736.886: 94.5211% ( 23) 00:08:04.200 16736.886 - 16837.711: 94.7748% ( 25) 00:08:04.200 16837.711 - 16938.535: 95.0690% ( 29) 00:08:04.200 16938.535 - 17039.360: 95.3531% ( 28) 00:08:04.200 17039.360 - 17140.185: 95.6575% ( 30) 00:08:04.200 17140.185 - 17241.009: 95.9213% ( 26) 00:08:04.200 17241.009 - 17341.834: 96.1343% ( 21) 00:08:04.200 17341.834 - 17442.658: 96.2459% ( 11) 00:08:04.200 17442.658 - 17543.483: 96.3474% ( 10) 00:08:04.200 17543.483 - 17644.308: 96.4489% ( 10) 00:08:04.200 17644.308 - 17745.132: 96.6315% ( 18) 00:08:04.200 17745.132 - 17845.957: 96.8243% ( 19) 00:08:04.200 17845.957 - 17946.782: 97.0170% ( 19) 00:08:04.200 17946.782 - 18047.606: 97.1692% ( 15) 00:08:04.200 18047.606 - 18148.431: 97.2707% ( 10) 00:08:04.200 18148.431 - 18249.255: 97.4026% ( 13) 00:08:04.200 18249.255 - 18350.080: 97.5244% ( 12) 00:08:04.200 18350.080 - 18450.905: 97.6461% ( 12) 00:08:04.200 18450.905 - 18551.729: 97.8592% ( 21) 00:08:04.200 18551.729 - 18652.554: 98.0519% ( 19) 00:08:04.200 18652.554 - 18753.378: 98.1230% ( 7) 00:08:04.200 18753.378 - 18854.203: 98.1940% ( 7) 00:08:04.200 18854.203 - 18955.028: 98.2752% ( 8) 00:08:04.200 18955.028 - 19055.852: 98.3157% ( 4) 00:08:04.200 19055.852 - 19156.677: 98.3969% ( 8) 00:08:04.200 19156.677 - 19257.502: 98.5390% ( 14) 00:08:04.200 19257.502 - 19358.326: 98.7216% ( 18) 00:08:04.200 19358.326 - 19459.151: 98.9144% ( 19) 00:08:04.200 19459.151 - 19559.975: 99.0463% ( 13) 00:08:04.200 19559.975 - 19660.800: 99.1274% ( 8) 00:08:04.200 19660.800 - 19761.625: 99.1883% ( 6) 00:08:04.201 19761.625 - 19862.449: 99.2188% ( 3) 00:08:04.201 19862.449 - 19963.274: 99.2593% ( 4) 00:08:04.201 19963.274 - 20064.098: 99.2898% ( 3) 00:08:04.201 20064.098 - 20164.923: 99.3304% ( 4) 00:08:04.201 20164.923 - 20265.748: 99.3506% ( 2) 00:08:04.201 24802.855 - 24903.680: 99.3709% ( 2) 00:08:04.201 24903.680 - 25004.505: 99.4014% ( 3) 00:08:04.201 25004.505 - 25105.329: 99.4521% ( 5) 00:08:04.201 25105.329 - 25206.154: 99.5231% ( 7) 00:08:04.201 25206.154 - 25306.978: 99.5942% ( 7) 00:08:04.201 25306.978 - 25407.803: 99.6753% ( 8) 00:08:04.201 25407.803 - 25508.628: 99.7869% ( 11) 00:08:04.201 25508.628 - 25609.452: 99.8377% ( 5) 00:08:04.201 25609.452 - 25710.277: 99.8884% ( 5) 00:08:04.201 25710.277 - 25811.102: 99.9391% ( 5) 00:08:04.201 25811.102 - 26012.751: 100.0000% ( 6) 00:08:04.201 00:08:04.201 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:04.201 ============================================================================== 00:08:04.201 Range in us Cumulative IO count 00:08:04.201 7763.495 - 7813.908: 0.0101% ( 1) 00:08:04.201 7813.908 - 7864.320: 0.0203% ( 1) 00:08:04.201 7864.320 - 
7914.732: 0.1116% ( 9) 00:08:04.201 7914.732 - 7965.145: 0.2435% ( 13) 00:08:04.201 7965.145 - 8015.557: 0.3044% ( 6) 00:08:04.201 8015.557 - 8065.969: 0.3653% ( 6) 00:08:04.201 8065.969 - 8116.382: 0.4261% ( 6) 00:08:04.201 8116.382 - 8166.794: 0.4566% ( 3) 00:08:04.201 8217.206 - 8267.618: 0.4972% ( 4) 00:08:04.201 8267.618 - 8318.031: 0.5276% ( 3) 00:08:04.201 8318.031 - 8368.443: 0.5479% ( 2) 00:08:04.201 8368.443 - 8418.855: 0.5682% ( 2) 00:08:04.201 8418.855 - 8469.268: 0.6088% ( 4) 00:08:04.201 8469.268 - 8519.680: 0.6494% ( 4) 00:08:04.201 9729.575 - 9779.988: 0.6595% ( 1) 00:08:04.201 9779.988 - 9830.400: 0.6798% ( 2) 00:08:04.201 9830.400 - 9880.812: 0.7305% ( 5) 00:08:04.201 9880.812 - 9931.225: 0.8117% ( 8) 00:08:04.201 9931.225 - 9981.637: 0.8929% ( 8) 00:08:04.201 9981.637 - 10032.049: 1.0248% ( 13) 00:08:04.201 10032.049 - 10082.462: 1.2886% ( 26) 00:08:04.201 10082.462 - 10132.874: 1.5625% ( 27) 00:08:04.201 10132.874 - 10183.286: 1.8567% ( 29) 00:08:04.201 10183.286 - 10233.698: 2.1408% ( 28) 00:08:04.201 10233.698 - 10284.111: 2.5061% ( 36) 00:08:04.201 10284.111 - 10334.523: 2.8713% ( 36) 00:08:04.201 10334.523 - 10384.935: 3.1047% ( 23) 00:08:04.201 10384.935 - 10435.348: 3.2873% ( 18) 00:08:04.201 10435.348 - 10485.760: 3.6222% ( 33) 00:08:04.201 10485.760 - 10536.172: 3.8860% ( 26) 00:08:04.201 10536.172 - 10586.585: 4.1903% ( 30) 00:08:04.201 10586.585 - 10636.997: 4.6266% ( 43) 00:08:04.201 10636.997 - 10687.409: 5.0020% ( 37) 00:08:04.201 10687.409 - 10737.822: 5.7021% ( 69) 00:08:04.201 10737.822 - 10788.234: 6.3413% ( 63) 00:08:04.201 10788.234 - 10838.646: 6.8892% ( 54) 00:08:04.201 10838.646 - 10889.058: 7.3762% ( 48) 00:08:04.201 10889.058 - 10939.471: 8.1473% ( 76) 00:08:04.201 10939.471 - 10989.883: 8.9692% ( 81) 00:08:04.201 10989.883 - 11040.295: 9.9127% ( 93) 00:08:04.201 11040.295 - 11090.708: 10.8157% ( 89) 00:08:04.201 11090.708 - 11141.120: 11.8709% ( 104) 00:08:04.201 11141.120 - 11191.532: 12.9261% ( 104) 00:08:04.201 11191.532 - 11241.945: 13.8494% ( 91) 00:08:04.201 11241.945 - 11292.357: 14.8235% ( 96) 00:08:04.201 11292.357 - 11342.769: 15.8787% ( 104) 00:08:04.201 11342.769 - 11393.182: 17.0860% ( 119) 00:08:04.201 11393.182 - 11443.594: 18.2427% ( 114) 00:08:04.201 11443.594 - 11494.006: 19.0950% ( 84) 00:08:04.201 11494.006 - 11544.418: 20.0487% ( 94) 00:08:04.201 11544.418 - 11594.831: 20.9111% ( 85) 00:08:04.201 11594.831 - 11645.243: 21.9156% ( 99) 00:08:04.201 11645.243 - 11695.655: 23.2041% ( 127) 00:08:04.201 11695.655 - 11746.068: 24.2695% ( 105) 00:08:04.201 11746.068 - 11796.480: 25.5377% ( 125) 00:08:04.201 11796.480 - 11846.892: 26.9278% ( 137) 00:08:04.201 11846.892 - 11897.305: 28.3685% ( 142) 00:08:04.201 11897.305 - 11947.717: 29.6368% ( 125) 00:08:04.201 11947.717 - 11998.129: 31.0978% ( 144) 00:08:04.201 11998.129 - 12048.542: 32.7618% ( 164) 00:08:04.201 12048.542 - 12098.954: 34.2938% ( 151) 00:08:04.201 12098.954 - 12149.366: 35.5824% ( 127) 00:08:04.201 12149.366 - 12199.778: 36.7188% ( 112) 00:08:04.201 12199.778 - 12250.191: 38.0377% ( 130) 00:08:04.201 12250.191 - 12300.603: 39.3973% ( 134) 00:08:04.201 12300.603 - 12351.015: 40.9192% ( 150) 00:08:04.201 12351.015 - 12401.428: 42.2788% ( 134) 00:08:04.201 12401.428 - 12451.840: 43.4862% ( 119) 00:08:04.201 12451.840 - 12502.252: 44.9878% ( 148) 00:08:04.201 12502.252 - 12552.665: 46.1648% ( 116) 00:08:04.201 12552.665 - 12603.077: 47.4432% ( 126) 00:08:04.201 12603.077 - 12653.489: 48.7825% ( 132) 00:08:04.201 12653.489 - 12703.902: 50.3653% ( 156) 00:08:04.201 
12703.902 - 12754.314: 51.7147% ( 133) 00:08:04.201 12754.314 - 12804.726: 53.0743% ( 134) 00:08:04.201 12804.726 - 12855.138: 54.8904% ( 179) 00:08:04.201 12855.138 - 12905.551: 56.5848% ( 167) 00:08:04.201 12905.551 - 13006.375: 59.4460% ( 282) 00:08:04.201 13006.375 - 13107.200: 62.7841% ( 329) 00:08:04.201 13107.200 - 13208.025: 65.2496% ( 243) 00:08:04.201 13208.025 - 13308.849: 67.3904% ( 211) 00:08:04.201 13308.849 - 13409.674: 69.6429% ( 222) 00:08:04.201 13409.674 - 13510.498: 71.9156% ( 224) 00:08:04.201 13510.498 - 13611.323: 74.0970% ( 215) 00:08:04.201 13611.323 - 13712.148: 75.6291% ( 151) 00:08:04.201 13712.148 - 13812.972: 77.3133% ( 166) 00:08:04.201 13812.972 - 13913.797: 78.3888% ( 106) 00:08:04.201 13913.797 - 14014.622: 79.5150% ( 111) 00:08:04.201 14014.622 - 14115.446: 80.7427% ( 121) 00:08:04.201 14115.446 - 14216.271: 81.8994% ( 114) 00:08:04.201 14216.271 - 14317.095: 83.1879% ( 127) 00:08:04.201 14317.095 - 14417.920: 84.1924% ( 99) 00:08:04.201 14417.920 - 14518.745: 85.2070% ( 100) 00:08:04.201 14518.745 - 14619.569: 85.9578% ( 74) 00:08:04.201 14619.569 - 14720.394: 86.4042% ( 44) 00:08:04.201 14720.394 - 14821.218: 87.0637% ( 65) 00:08:04.201 14821.218 - 14922.043: 87.8653% ( 79) 00:08:04.201 14922.043 - 15022.868: 88.4943% ( 62) 00:08:04.201 15022.868 - 15123.692: 89.1538% ( 65) 00:08:04.201 15123.692 - 15224.517: 89.7930% ( 63) 00:08:04.201 15224.517 - 15325.342: 90.3003% ( 50) 00:08:04.201 15325.342 - 15426.166: 90.8279% ( 52) 00:08:04.201 15426.166 - 15526.991: 91.2744% ( 44) 00:08:04.201 15526.991 - 15627.815: 91.6599% ( 38) 00:08:04.201 15627.815 - 15728.640: 92.0556% ( 39) 00:08:04.201 15728.640 - 15829.465: 92.4614% ( 40) 00:08:04.201 15829.465 - 15930.289: 92.8673% ( 40) 00:08:04.201 15930.289 - 16031.114: 93.1717% ( 30) 00:08:04.201 16031.114 - 16131.938: 93.4253% ( 25) 00:08:04.201 16131.938 - 16232.763: 93.5877% ( 16) 00:08:04.201 16232.763 - 16333.588: 93.7804% ( 19) 00:08:04.201 16333.588 - 16434.412: 93.8920% ( 11) 00:08:04.201 16434.412 - 16535.237: 94.1051% ( 21) 00:08:04.201 16535.237 - 16636.062: 94.2167% ( 11) 00:08:04.201 16636.062 - 16736.886: 94.3689% ( 15) 00:08:04.201 16736.886 - 16837.711: 94.5617% ( 19) 00:08:04.201 16837.711 - 16938.535: 94.8356% ( 27) 00:08:04.201 16938.535 - 17039.360: 94.9878% ( 15) 00:08:04.201 17039.360 - 17140.185: 95.0791% ( 9) 00:08:04.201 17140.185 - 17241.009: 95.2719% ( 19) 00:08:04.201 17241.009 - 17341.834: 95.5357% ( 26) 00:08:04.201 17341.834 - 17442.658: 95.7285% ( 19) 00:08:04.201 17442.658 - 17543.483: 95.9720% ( 24) 00:08:04.201 17543.483 - 17644.308: 96.2358% ( 26) 00:08:04.201 17644.308 - 17745.132: 96.4996% ( 26) 00:08:04.201 17745.132 - 17845.957: 96.7330% ( 23) 00:08:04.201 17845.957 - 17946.782: 96.9257% ( 19) 00:08:04.201 17946.782 - 18047.606: 97.0475% ( 12) 00:08:04.201 18047.606 - 18148.431: 97.2098% ( 16) 00:08:04.201 18148.431 - 18249.255: 97.3620% ( 15) 00:08:04.201 18249.255 - 18350.080: 97.4939% ( 13) 00:08:04.201 18350.080 - 18450.905: 97.6664% ( 17) 00:08:04.201 18450.905 - 18551.729: 97.7374% ( 7) 00:08:04.201 18551.729 - 18652.554: 97.7983% ( 6) 00:08:04.201 18652.554 - 18753.378: 97.9403% ( 14) 00:08:04.201 18753.378 - 18854.203: 98.0012% ( 6) 00:08:04.201 18854.203 - 18955.028: 98.0824% ( 8) 00:08:04.201 18955.028 - 19055.852: 98.1838% ( 10) 00:08:04.201 19055.852 - 19156.677: 98.2752% ( 9) 00:08:04.201 19156.677 - 19257.502: 98.3665% ( 9) 00:08:04.201 19257.502 - 19358.326: 98.4679% ( 10) 00:08:04.201 19358.326 - 19459.151: 98.6709% ( 20) 00:08:04.201 19459.151 - 19559.975: 
98.7419% ( 7) 00:08:04.201 19559.975 - 19660.800: 98.8129% ( 7) 00:08:04.201 19660.800 - 19761.625: 98.9144% ( 10) 00:08:04.201 19761.625 - 19862.449: 98.9752% ( 6) 00:08:04.201 19862.449 - 19963.274: 99.0767% ( 10) 00:08:04.201 19963.274 - 20064.098: 99.1477% ( 7) 00:08:04.201 20064.098 - 20164.923: 99.2390% ( 9) 00:08:04.201 20164.923 - 20265.748: 99.3405% ( 10) 00:08:04.201 20265.748 - 20366.572: 99.3506% ( 1) 00:08:04.201 24298.732 - 24399.557: 99.3709% ( 2) 00:08:04.201 24399.557 - 24500.382: 99.4217% ( 5) 00:08:04.201 24500.382 - 24601.206: 99.4623% ( 4) 00:08:04.201 24601.206 - 24702.031: 99.5130% ( 5) 00:08:04.201 24702.031 - 24802.855: 99.5536% ( 4) 00:08:04.201 24802.855 - 24903.680: 99.6043% ( 5) 00:08:04.201 24903.680 - 25004.505: 99.6550% ( 5) 00:08:04.201 25004.505 - 25105.329: 99.6956% ( 4) 00:08:04.201 25105.329 - 25206.154: 99.7362% ( 4) 00:08:04.201 25206.154 - 25306.978: 99.7869% ( 5) 00:08:04.201 25306.978 - 25407.803: 99.8275% ( 4) 00:08:04.201 25407.803 - 25508.628: 99.8580% ( 3) 00:08:04.201 25508.628 - 25609.452: 99.9087% ( 5) 00:08:04.201 25609.452 - 25710.277: 99.9493% ( 4) 00:08:04.202 25710.277 - 25811.102: 99.9899% ( 4) 00:08:04.202 25811.102 - 26012.751: 100.0000% ( 1) 00:08:04.202 00:08:04.202 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:04.202 ============================================================================== 00:08:04.202 Range in us Cumulative IO count 00:08:04.202 7612.258 - 7662.671: 0.0101% ( 1) 00:08:04.202 7662.671 - 7713.083: 0.0710% ( 6) 00:08:04.202 7713.083 - 7763.495: 0.2131% ( 14) 00:08:04.202 7763.495 - 7813.908: 0.4058% ( 19) 00:08:04.202 7813.908 - 7864.320: 0.4464% ( 4) 00:08:04.202 7864.320 - 7914.732: 0.4769% ( 3) 00:08:04.202 7914.732 - 7965.145: 0.5073% ( 3) 00:08:04.202 7965.145 - 8015.557: 0.5479% ( 4) 00:08:04.202 8015.557 - 8065.969: 0.5885% ( 4) 00:08:04.202 8065.969 - 8116.382: 0.6189% ( 3) 00:08:04.202 8116.382 - 8166.794: 0.6494% ( 3) 00:08:04.202 9628.751 - 9679.163: 0.6595% ( 1) 00:08:04.202 9729.575 - 9779.988: 0.6696% ( 1) 00:08:04.202 9779.988 - 9830.400: 0.6798% ( 1) 00:08:04.202 9830.400 - 9880.812: 0.6899% ( 1) 00:08:04.202 9880.812 - 9931.225: 0.7407% ( 5) 00:08:04.202 9931.225 - 9981.637: 0.7711% ( 3) 00:08:04.202 9981.637 - 10032.049: 0.8421% ( 7) 00:08:04.202 10032.049 - 10082.462: 0.9334% ( 9) 00:08:04.202 10082.462 - 10132.874: 1.1161% ( 18) 00:08:04.202 10132.874 - 10183.286: 1.3291% ( 21) 00:08:04.202 10183.286 - 10233.698: 1.5828% ( 25) 00:08:04.202 10233.698 - 10284.111: 1.8263% ( 24) 00:08:04.202 10284.111 - 10334.523: 2.1510% ( 32) 00:08:04.202 10334.523 - 10384.935: 2.5365% ( 38) 00:08:04.202 10384.935 - 10435.348: 3.0134% ( 47) 00:08:04.202 10435.348 - 10485.760: 3.3482% ( 33) 00:08:04.202 10485.760 - 10536.172: 3.7946% ( 44) 00:08:04.202 10536.172 - 10586.585: 4.3933% ( 59) 00:08:04.202 10586.585 - 10636.997: 4.9310% ( 53) 00:08:04.202 10636.997 - 10687.409: 5.3369% ( 40) 00:08:04.202 10687.409 - 10737.822: 5.8340% ( 49) 00:08:04.202 10737.822 - 10788.234: 6.1993% ( 36) 00:08:04.202 10788.234 - 10838.646: 6.6153% ( 41) 00:08:04.202 10838.646 - 10889.058: 7.0414% ( 42) 00:08:04.202 10889.058 - 10939.471: 7.6502% ( 60) 00:08:04.202 10939.471 - 10989.883: 8.4517% ( 79) 00:08:04.202 10989.883 - 11040.295: 9.1924% ( 73) 00:08:04.202 11040.295 - 11090.708: 10.1461% ( 94) 00:08:04.202 11090.708 - 11141.120: 11.4550% ( 129) 00:08:04.202 11141.120 - 11191.532: 12.4696% ( 100) 00:08:04.202 11191.532 - 11241.945: 13.5248% ( 104) 00:08:04.202 11241.945 - 11292.357: 14.7829% ( 124) 
00:08:04.202 11292.357 - 11342.769: 16.0714% ( 127) 00:08:04.202 11342.769 - 11393.182: 16.9846% ( 90) 00:08:04.202 11393.182 - 11443.594: 17.9079% ( 91) 00:08:04.202 11443.594 - 11494.006: 18.7906% ( 87) 00:08:04.202 11494.006 - 11544.418: 19.7139% ( 91) 00:08:04.202 11544.418 - 11594.831: 20.9416% ( 121) 00:08:04.202 11594.831 - 11645.243: 21.8953% ( 94) 00:08:04.202 11645.243 - 11695.655: 22.8693% ( 96) 00:08:04.202 11695.655 - 11746.068: 24.1274% ( 124) 00:08:04.202 11746.068 - 11796.480: 25.0812% ( 94) 00:08:04.202 11796.480 - 11846.892: 26.1465% ( 105) 00:08:04.202 11846.892 - 11897.305: 27.3640% ( 120) 00:08:04.202 11897.305 - 11947.717: 28.4395% ( 106) 00:08:04.202 11947.717 - 11998.129: 29.5049% ( 105) 00:08:04.202 11998.129 - 12048.542: 30.8239% ( 130) 00:08:04.202 12048.542 - 12098.954: 32.2037% ( 136) 00:08:04.202 12098.954 - 12149.366: 33.8373% ( 161) 00:08:04.202 12149.366 - 12199.778: 35.3998% ( 154) 00:08:04.202 12199.778 - 12250.191: 36.8506% ( 143) 00:08:04.202 12250.191 - 12300.603: 38.1494% ( 128) 00:08:04.202 12300.603 - 12351.015: 39.6205% ( 145) 00:08:04.202 12351.015 - 12401.428: 41.4062% ( 176) 00:08:04.202 12401.428 - 12451.840: 43.1615% ( 173) 00:08:04.202 12451.840 - 12502.252: 45.1603% ( 197) 00:08:04.202 12502.252 - 12552.665: 47.0576% ( 187) 00:08:04.202 12552.665 - 12603.077: 48.8231% ( 174) 00:08:04.202 12603.077 - 12653.489: 50.6291% ( 178) 00:08:04.202 12653.489 - 12703.902: 52.2829% ( 163) 00:08:04.202 12703.902 - 12754.314: 53.7642% ( 146) 00:08:04.202 12754.314 - 12804.726: 55.1745% ( 139) 00:08:04.202 12804.726 - 12855.138: 56.5747% ( 138) 00:08:04.202 12855.138 - 12905.551: 58.0966% ( 150) 00:08:04.202 12905.551 - 13006.375: 60.6940% ( 256) 00:08:04.202 13006.375 - 13107.200: 62.8247% ( 210) 00:08:04.202 13107.200 - 13208.025: 64.8133% ( 196) 00:08:04.202 13208.025 - 13308.849: 67.1672% ( 232) 00:08:04.202 13308.849 - 13409.674: 69.4602% ( 226) 00:08:04.202 13409.674 - 13510.498: 72.1388% ( 264) 00:08:04.202 13510.498 - 13611.323: 75.0812% ( 290) 00:08:04.202 13611.323 - 13712.148: 77.2423% ( 213) 00:08:04.202 13712.148 - 13812.972: 78.9570% ( 169) 00:08:04.202 13812.972 - 13913.797: 80.4992% ( 152) 00:08:04.202 13913.797 - 14014.622: 81.6356% ( 112) 00:08:04.202 14014.622 - 14115.446: 82.4878% ( 84) 00:08:04.202 14115.446 - 14216.271: 83.3604% ( 86) 00:08:04.202 14216.271 - 14317.095: 84.2837% ( 91) 00:08:04.202 14317.095 - 14417.920: 85.0751% ( 78) 00:08:04.202 14417.920 - 14518.745: 85.7752% ( 69) 00:08:04.202 14518.745 - 14619.569: 86.3535% ( 57) 00:08:04.202 14619.569 - 14720.394: 86.8304% ( 47) 00:08:04.202 14720.394 - 14821.218: 87.3072% ( 47) 00:08:04.202 14821.218 - 14922.043: 87.8145% ( 50) 00:08:04.202 14922.043 - 15022.868: 88.3726% ( 55) 00:08:04.202 15022.868 - 15123.692: 88.7886% ( 41) 00:08:04.202 15123.692 - 15224.517: 89.1843% ( 39) 00:08:04.202 15224.517 - 15325.342: 89.5800% ( 39) 00:08:04.202 15325.342 - 15426.166: 89.9655% ( 38) 00:08:04.202 15426.166 - 15526.991: 90.5235% ( 55) 00:08:04.202 15526.991 - 15627.815: 91.2236% ( 69) 00:08:04.202 15627.815 - 15728.640: 91.7005% ( 47) 00:08:04.202 15728.640 - 15829.465: 92.2179% ( 51) 00:08:04.202 15829.465 - 15930.289: 92.6948% ( 47) 00:08:04.202 15930.289 - 16031.114: 92.9789% ( 28) 00:08:04.202 16031.114 - 16131.938: 93.3847% ( 40) 00:08:04.202 16131.938 - 16232.763: 93.7094% ( 32) 00:08:04.202 16232.763 - 16333.588: 94.0138% ( 30) 00:08:04.202 16333.588 - 16434.412: 94.2066% ( 19) 00:08:04.202 16434.412 - 16535.237: 94.4095% ( 20) 00:08:04.202 16535.237 - 16636.062: 94.6429% ( 23) 
00:08:04.202 16636.062 - 16736.886: 94.7240% ( 8) 00:08:04.202 16736.886 - 16837.711: 94.7849% ( 6) 00:08:04.202 16837.711 - 16938.535: 94.8153% ( 3) 00:08:04.202 16938.535 - 17039.360: 94.8255% ( 1) 00:08:04.202 17039.360 - 17140.185: 94.9269% ( 10) 00:08:04.202 17140.185 - 17241.009: 95.0081% ( 8) 00:08:04.202 17241.009 - 17341.834: 95.3226% ( 31) 00:08:04.202 17341.834 - 17442.658: 95.4140% ( 9) 00:08:04.202 17442.658 - 17543.483: 95.5053% ( 9) 00:08:04.202 17543.483 - 17644.308: 95.7082% ( 20) 00:08:04.202 17644.308 - 17745.132: 95.8807% ( 17) 00:08:04.202 17745.132 - 17845.957: 96.0329% ( 15) 00:08:04.202 17845.957 - 17946.782: 96.2358% ( 20) 00:08:04.202 17946.782 - 18047.606: 96.5300% ( 29) 00:08:04.202 18047.606 - 18148.431: 96.8649% ( 33) 00:08:04.202 18148.431 - 18249.255: 97.3214% ( 45) 00:08:04.202 18249.255 - 18350.080: 97.6055% ( 28) 00:08:04.202 18350.080 - 18450.905: 97.7780% ( 17) 00:08:04.202 18450.905 - 18551.729: 97.8896% ( 11) 00:08:04.202 18551.729 - 18652.554: 97.9911% ( 10) 00:08:04.202 18652.554 - 18753.378: 98.0925% ( 10) 00:08:04.202 18753.378 - 18854.203: 98.2244% ( 13) 00:08:04.202 18854.203 - 18955.028: 98.3462% ( 12) 00:08:04.202 18955.028 - 19055.852: 98.4679% ( 12) 00:08:04.202 19055.852 - 19156.677: 98.5491% ( 8) 00:08:04.202 19156.677 - 19257.502: 98.6404% ( 9) 00:08:04.202 19257.502 - 19358.326: 98.7419% ( 10) 00:08:04.202 19358.326 - 19459.151: 98.8332% ( 9) 00:08:04.202 19459.151 - 19559.975: 98.9144% ( 8) 00:08:04.202 19559.975 - 19660.800: 98.9955% ( 8) 00:08:04.202 19660.800 - 19761.625: 99.0564% ( 6) 00:08:04.202 19761.625 - 19862.449: 99.0970% ( 4) 00:08:04.202 19862.449 - 19963.274: 99.1376% ( 4) 00:08:04.202 19963.274 - 20064.098: 99.1782% ( 4) 00:08:04.202 20064.098 - 20164.923: 99.2188% ( 4) 00:08:04.202 20164.923 - 20265.748: 99.2593% ( 4) 00:08:04.202 20265.748 - 20366.572: 99.3101% ( 5) 00:08:04.202 20366.572 - 20467.397: 99.3506% ( 4) 00:08:04.202 23693.785 - 23794.609: 99.4014% ( 5) 00:08:04.202 23794.609 - 23895.434: 99.4521% ( 5) 00:08:04.202 23895.434 - 23996.258: 99.5028% ( 5) 00:08:04.202 23996.258 - 24097.083: 99.5739% ( 7) 00:08:04.202 24097.083 - 24197.908: 99.5942% ( 2) 00:08:04.202 24298.732 - 24399.557: 99.6347% ( 4) 00:08:04.202 24399.557 - 24500.382: 99.6855% ( 5) 00:08:04.202 24500.382 - 24601.206: 99.7463% ( 6) 00:08:04.202 24601.206 - 24702.031: 99.8072% ( 6) 00:08:04.202 24702.031 - 24802.855: 99.8884% ( 8) 00:08:04.202 24802.855 - 24903.680: 99.9290% ( 4) 00:08:04.202 24903.680 - 25004.505: 99.9696% ( 4) 00:08:04.202 25004.505 - 25105.329: 100.0000% ( 3) 00:08:04.202 00:08:04.202 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:04.202 ============================================================================== 00:08:04.202 Range in us Cumulative IO count 00:08:04.202 6452.775 - 6503.188: 0.0710% ( 7) 00:08:04.202 6503.188 - 6553.600: 0.2942% ( 22) 00:08:04.202 6553.600 - 6604.012: 0.4363% ( 14) 00:08:04.202 6604.012 - 6654.425: 0.4769% ( 4) 00:08:04.202 6654.425 - 6704.837: 0.5175% ( 4) 00:08:04.202 6704.837 - 6755.249: 0.5479% ( 3) 00:08:04.202 6755.249 - 6805.662: 0.5885% ( 4) 00:08:04.202 6805.662 - 6856.074: 0.6291% ( 4) 00:08:04.202 6856.074 - 6906.486: 0.6494% ( 2) 00:08:04.202 9779.988 - 9830.400: 0.6595% ( 1) 00:08:04.202 9981.637 - 10032.049: 0.6899% ( 3) 00:08:04.202 10032.049 - 10082.462: 0.7711% ( 8) 00:08:04.203 10082.462 - 10132.874: 0.8523% ( 8) 00:08:04.203 10132.874 - 10183.286: 1.0248% ( 17) 00:08:04.203 10183.286 - 10233.698: 1.1972% ( 17) 00:08:04.203 10233.698 - 10284.111: 
1.4103% ( 21) 00:08:04.203 10284.111 - 10334.523: 1.5929% ( 18) 00:08:04.203 10334.523 - 10384.935: 1.8364% ( 24) 00:08:04.203 10384.935 - 10435.348: 2.1205% ( 28) 00:08:04.203 10435.348 - 10485.760: 2.4249% ( 30) 00:08:04.203 10485.760 - 10536.172: 2.7293% ( 30) 00:08:04.203 10536.172 - 10586.585: 3.1250% ( 39) 00:08:04.203 10586.585 - 10636.997: 3.6729% ( 54) 00:08:04.203 10636.997 - 10687.409: 4.3933% ( 71) 00:08:04.203 10687.409 - 10737.822: 4.8295% ( 43) 00:08:04.203 10737.822 - 10788.234: 5.3064% ( 47) 00:08:04.203 10788.234 - 10838.646: 5.9253% ( 61) 00:08:04.203 10838.646 - 10889.058: 6.3109% ( 38) 00:08:04.203 10889.058 - 10939.471: 6.7472% ( 43) 00:08:04.203 10939.471 - 10989.883: 7.2849% ( 53) 00:08:04.203 10989.883 - 11040.295: 7.8226% ( 53) 00:08:04.203 11040.295 - 11090.708: 8.2894% ( 46) 00:08:04.203 11090.708 - 11141.120: 9.0909% ( 79) 00:08:04.203 11141.120 - 11191.532: 9.7200% ( 62) 00:08:04.203 11191.532 - 11241.945: 10.3084% ( 58) 00:08:04.203 11241.945 - 11292.357: 10.9882% ( 67) 00:08:04.203 11292.357 - 11342.769: 11.9217% ( 92) 00:08:04.203 11342.769 - 11393.182: 13.0175% ( 108) 00:08:04.203 11393.182 - 11443.594: 14.0726% ( 104) 00:08:04.203 11443.594 - 11494.006: 15.1989% ( 111) 00:08:04.203 11494.006 - 11544.418: 16.5077% ( 129) 00:08:04.203 11544.418 - 11594.831: 17.8267% ( 130) 00:08:04.203 11594.831 - 11645.243: 19.3486% ( 150) 00:08:04.203 11645.243 - 11695.655: 21.0227% ( 165) 00:08:04.203 11695.655 - 11746.068: 22.5954% ( 155) 00:08:04.203 11746.068 - 11796.480: 24.2289% ( 161) 00:08:04.203 11796.480 - 11846.892: 25.7102% ( 146) 00:08:04.203 11846.892 - 11897.305: 27.3336% ( 160) 00:08:04.203 11897.305 - 11947.717: 29.0889% ( 173) 00:08:04.203 11947.717 - 11998.129: 30.9761% ( 186) 00:08:04.203 11998.129 - 12048.542: 32.8937% ( 189) 00:08:04.203 12048.542 - 12098.954: 34.6591% ( 174) 00:08:04.203 12098.954 - 12149.366: 35.9679% ( 129) 00:08:04.203 12149.366 - 12199.778: 37.4391% ( 145) 00:08:04.203 12199.778 - 12250.191: 38.9407% ( 148) 00:08:04.203 12250.191 - 12300.603: 40.4424% ( 148) 00:08:04.203 12300.603 - 12351.015: 41.9541% ( 149) 00:08:04.203 12351.015 - 12401.428: 43.8920% ( 191) 00:08:04.203 12401.428 - 12451.840: 45.7589% ( 184) 00:08:04.203 12451.840 - 12502.252: 47.4026% ( 162) 00:08:04.203 12502.252 - 12552.665: 49.0564% ( 163) 00:08:04.203 12552.665 - 12603.077: 50.8218% ( 174) 00:08:04.203 12603.077 - 12653.489: 52.2727% ( 143) 00:08:04.203 12653.489 - 12703.902: 53.7642% ( 147) 00:08:04.203 12703.902 - 12754.314: 55.5702% ( 178) 00:08:04.203 12754.314 - 12804.726: 57.2443% ( 165) 00:08:04.203 12804.726 - 12855.138: 58.5227% ( 126) 00:08:04.203 12855.138 - 12905.551: 59.5576% ( 102) 00:08:04.203 12905.551 - 13006.375: 61.6985% ( 211) 00:08:04.203 13006.375 - 13107.200: 64.4481% ( 271) 00:08:04.203 13107.200 - 13208.025: 67.5629% ( 307) 00:08:04.203 13208.025 - 13308.849: 70.0386% ( 244) 00:08:04.203 13308.849 - 13409.674: 71.9562% ( 189) 00:08:04.203 13409.674 - 13510.498: 73.9651% ( 198) 00:08:04.203 13510.498 - 13611.323: 75.9030% ( 191) 00:08:04.203 13611.323 - 13712.148: 77.5873% ( 166) 00:08:04.203 13712.148 - 13812.972: 78.8961% ( 129) 00:08:04.203 13812.972 - 13913.797: 80.4282% ( 151) 00:08:04.203 13913.797 - 14014.622: 81.2500% ( 81) 00:08:04.203 14014.622 - 14115.446: 82.1530% ( 89) 00:08:04.203 14115.446 - 14216.271: 83.1067% ( 94) 00:08:04.203 14216.271 - 14317.095: 83.8778% ( 76) 00:08:04.203 14317.095 - 14417.920: 84.4460% ( 56) 00:08:04.203 14417.920 - 14518.745: 84.9939% ( 54) 00:08:04.203 14518.745 - 14619.569: 85.4809% ( 
48) 00:08:04.203 14619.569 - 14720.394: 85.8969% ( 41) 00:08:04.203 14720.394 - 14821.218: 86.4245% ( 52) 00:08:04.203 14821.218 - 14922.043: 87.0231% ( 59) 00:08:04.203 14922.043 - 15022.868: 87.3782% ( 35) 00:08:04.203 15022.868 - 15123.692: 87.7537% ( 37) 00:08:04.203 15123.692 - 15224.517: 88.3523% ( 59) 00:08:04.203 15224.517 - 15325.342: 89.3060% ( 94) 00:08:04.203 15325.342 - 15426.166: 89.9756% ( 66) 00:08:04.203 15426.166 - 15526.991: 90.4627% ( 48) 00:08:04.203 15526.991 - 15627.815: 91.2642% ( 79) 00:08:04.203 15627.815 - 15728.640: 91.7005% ( 43) 00:08:04.203 15728.640 - 15829.465: 91.9744% ( 27) 00:08:04.203 15829.465 - 15930.289: 92.3397% ( 36) 00:08:04.203 15930.289 - 16031.114: 92.7050% ( 36) 00:08:04.203 16031.114 - 16131.938: 93.1717% ( 46) 00:08:04.203 16131.938 - 16232.763: 93.6080% ( 43) 00:08:04.203 16232.763 - 16333.588: 94.1660% ( 55) 00:08:04.203 16333.588 - 16434.412: 94.5515% ( 38) 00:08:04.203 16434.412 - 16535.237: 95.0284% ( 47) 00:08:04.203 16535.237 - 16636.062: 95.2618% ( 23) 00:08:04.203 16636.062 - 16736.886: 95.4140% ( 15) 00:08:04.203 16736.886 - 16837.711: 95.5560% ( 14) 00:08:04.203 16837.711 - 16938.535: 95.6676% ( 11) 00:08:04.203 16938.535 - 17039.360: 95.7792% ( 11) 00:08:04.203 17039.360 - 17140.185: 95.8807% ( 10) 00:08:04.203 17140.185 - 17241.009: 95.9619% ( 8) 00:08:04.203 17241.009 - 17341.834: 96.0024% ( 4) 00:08:04.203 17341.834 - 17442.658: 96.0329% ( 3) 00:08:04.203 17442.658 - 17543.483: 96.1140% ( 8) 00:08:04.203 17543.483 - 17644.308: 96.2662% ( 15) 00:08:04.203 17644.308 - 17745.132: 96.4489% ( 18) 00:08:04.203 17745.132 - 17845.957: 96.7431% ( 29) 00:08:04.203 17845.957 - 17946.782: 96.9257% ( 18) 00:08:04.203 17946.782 - 18047.606: 97.0982% ( 17) 00:08:04.203 18047.606 - 18148.431: 97.2504% ( 15) 00:08:04.203 18148.431 - 18249.255: 97.3823% ( 13) 00:08:04.203 18249.255 - 18350.080: 97.5345% ( 15) 00:08:04.203 18350.080 - 18450.905: 97.6765% ( 14) 00:08:04.203 18450.905 - 18551.729: 97.8389% ( 16) 00:08:04.203 18551.729 - 18652.554: 98.0317% ( 19) 00:08:04.203 18652.554 - 18753.378: 98.2650% ( 23) 00:08:04.203 18753.378 - 18854.203: 98.4375% ( 17) 00:08:04.203 18854.203 - 18955.028: 98.5998% ( 16) 00:08:04.203 18955.028 - 19055.852: 98.7317% ( 13) 00:08:04.203 19055.852 - 19156.677: 98.8535% ( 12) 00:08:04.203 19156.677 - 19257.502: 98.9347% ( 8) 00:08:04.203 19257.502 - 19358.326: 98.9752% ( 4) 00:08:04.203 19358.326 - 19459.151: 99.0057% ( 3) 00:08:04.203 19459.151 - 19559.975: 99.0463% ( 4) 00:08:04.203 19559.975 - 19660.800: 99.0869% ( 4) 00:08:04.203 19660.800 - 19761.625: 99.1274% ( 4) 00:08:04.203 19761.625 - 19862.449: 99.1680% ( 4) 00:08:04.203 19862.449 - 19963.274: 99.2086% ( 4) 00:08:04.203 19963.274 - 20064.098: 99.2390% ( 3) 00:08:04.203 20064.098 - 20164.923: 99.2796% ( 4) 00:08:04.203 20164.923 - 20265.748: 99.3101% ( 3) 00:08:04.203 20265.748 - 20366.572: 99.3405% ( 3) 00:08:04.203 20366.572 - 20467.397: 99.3506% ( 1) 00:08:04.203 24298.732 - 24399.557: 99.3912% ( 4) 00:08:04.203 24399.557 - 24500.382: 99.4318% ( 4) 00:08:04.203 24500.382 - 24601.206: 99.4724% ( 4) 00:08:04.203 24802.855 - 24903.680: 99.5028% ( 3) 00:08:04.203 24903.680 - 25004.505: 99.5434% ( 4) 00:08:04.203 25004.505 - 25105.329: 99.5942% ( 5) 00:08:04.203 25105.329 - 25206.154: 99.6347% ( 4) 00:08:04.203 25206.154 - 25306.978: 99.6753% ( 4) 00:08:04.203 25306.978 - 25407.803: 99.7159% ( 4) 00:08:04.203 25407.803 - 25508.628: 99.7565% ( 4) 00:08:04.203 25508.628 - 25609.452: 99.8072% ( 5) 00:08:04.203 25609.452 - 25710.277: 99.8478% ( 4) 
00:08:04.203 25710.277 - 25811.102: 99.9087% ( 6) 00:08:04.203 25811.102 - 26012.751: 100.0000% ( 9) 00:08:04.203 00:08:04.203 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:04.203 ============================================================================== 00:08:04.203 Range in us Cumulative IO count 00:08:04.203 5066.437 - 5091.643: 0.0101% ( 1) 00:08:04.203 5091.643 - 5116.849: 0.0304% ( 2) 00:08:04.203 5116.849 - 5142.055: 0.0609% ( 3) 00:08:04.203 5142.055 - 5167.262: 0.0710% ( 1) 00:08:04.203 5167.262 - 5192.468: 0.1116% ( 4) 00:08:04.203 5192.468 - 5217.674: 0.1218% ( 1) 00:08:04.203 5217.674 - 5242.880: 0.1319% ( 1) 00:08:04.203 5242.880 - 5268.086: 0.1522% ( 2) 00:08:04.203 5268.086 - 5293.292: 0.1725% ( 2) 00:08:04.203 5293.292 - 5318.498: 0.2334% ( 6) 00:08:04.203 5318.498 - 5343.705: 0.2841% ( 5) 00:08:04.203 5343.705 - 5368.911: 0.3044% ( 2) 00:08:04.203 5368.911 - 5394.117: 0.3145% ( 1) 00:08:04.203 5394.117 - 5419.323: 0.3348% ( 2) 00:08:04.203 5419.323 - 5444.529: 0.3653% ( 3) 00:08:04.203 5444.529 - 5469.735: 0.3754% ( 1) 00:08:04.203 5469.735 - 5494.942: 0.3856% ( 1) 00:08:04.203 5494.942 - 5520.148: 0.3957% ( 1) 00:08:04.203 5520.148 - 5545.354: 0.4058% ( 1) 00:08:04.203 5545.354 - 5570.560: 0.4261% ( 2) 00:08:04.203 5570.560 - 5595.766: 0.4363% ( 1) 00:08:04.203 5595.766 - 5620.972: 0.4566% ( 2) 00:08:04.203 5620.972 - 5646.178: 0.4667% ( 1) 00:08:04.203 5646.178 - 5671.385: 0.4870% ( 2) 00:08:04.203 5671.385 - 5696.591: 0.4972% ( 1) 00:08:04.203 5696.591 - 5721.797: 0.5175% ( 2) 00:08:04.203 5721.797 - 5747.003: 0.5276% ( 1) 00:08:04.203 5747.003 - 5772.209: 0.5377% ( 1) 00:08:04.203 5772.209 - 5797.415: 0.5580% ( 2) 00:08:04.203 5797.415 - 5822.622: 0.5682% ( 1) 00:08:04.203 5822.622 - 5847.828: 0.5885% ( 2) 00:08:04.203 5847.828 - 5873.034: 0.6088% ( 2) 00:08:04.203 5873.034 - 5898.240: 0.6189% ( 1) 00:08:04.203 5898.240 - 5923.446: 0.6392% ( 2) 00:08:04.203 5948.652 - 5973.858: 0.6494% ( 1) 00:08:04.203 9628.751 - 9679.163: 0.6595% ( 1) 00:08:04.204 9679.163 - 9729.575: 0.7001% ( 4) 00:08:04.204 9729.575 - 9779.988: 0.7407% ( 4) 00:08:04.204 9779.988 - 9830.400: 0.7812% ( 4) 00:08:04.204 9830.400 - 9880.812: 0.8523% ( 7) 00:08:04.204 9880.812 - 9931.225: 0.9537% ( 10) 00:08:04.204 9931.225 - 9981.637: 1.0755% ( 12) 00:08:04.204 9981.637 - 10032.049: 1.1567% ( 8) 00:08:04.204 10032.049 - 10082.462: 1.2378% ( 8) 00:08:04.204 10082.462 - 10132.874: 1.2683% ( 3) 00:08:04.204 10132.874 - 10183.286: 1.3291% ( 6) 00:08:04.204 10183.286 - 10233.698: 1.4813% ( 15) 00:08:04.204 10233.698 - 10284.111: 1.6437% ( 16) 00:08:04.204 10284.111 - 10334.523: 1.9379% ( 29) 00:08:04.204 10334.523 - 10384.935: 2.2321% ( 29) 00:08:04.204 10384.935 - 10435.348: 2.8612% ( 62) 00:08:04.204 10435.348 - 10485.760: 3.2670% ( 40) 00:08:04.204 10485.760 - 10536.172: 3.6019% ( 33) 00:08:04.204 10536.172 - 10586.585: 3.8352% ( 23) 00:08:04.204 10586.585 - 10636.997: 4.0889% ( 25) 00:08:04.204 10636.997 - 10687.409: 4.3222% ( 23) 00:08:04.204 10687.409 - 10737.822: 4.5759% ( 25) 00:08:04.204 10737.822 - 10788.234: 5.1238% ( 54) 00:08:04.204 10788.234 - 10838.646: 5.4079% ( 28) 00:08:04.204 10838.646 - 10889.058: 5.7934% ( 38) 00:08:04.204 10889.058 - 10939.471: 6.3819% ( 58) 00:08:04.204 10939.471 - 10989.883: 6.9704% ( 58) 00:08:04.204 10989.883 - 11040.295: 7.4878% ( 51) 00:08:04.204 11040.295 - 11090.708: 8.1270% ( 63) 00:08:04.204 11090.708 - 11141.120: 8.7865% ( 65) 00:08:04.204 11141.120 - 11191.532: 9.7910% ( 99) 00:08:04.204 11191.532 - 11241.945: 10.6230% ( 82) 
00:08:04.204 11241.945 - 11292.357: 11.3941% ( 76) 00:08:04.204 11292.357 - 11342.769: 12.2463% ( 84) 00:08:04.204 11342.769 - 11393.182: 13.1595% ( 90) 00:08:04.204 11393.182 - 11443.594: 14.1741% ( 100) 00:08:04.204 11443.594 - 11494.006: 15.1481% ( 96) 00:08:04.204 11494.006 - 11544.418: 16.0816% ( 92) 00:08:04.204 11544.418 - 11594.831: 17.3295% ( 123) 00:08:04.204 11594.831 - 11645.243: 18.3340% ( 99) 00:08:04.204 11645.243 - 11695.655: 19.6124% ( 126) 00:08:04.204 11695.655 - 11746.068: 21.2764% ( 164) 00:08:04.204 11746.068 - 11796.480: 22.8084% ( 151) 00:08:04.204 11796.480 - 11846.892: 24.5130% ( 168) 00:08:04.204 11846.892 - 11897.305: 26.5422% ( 200) 00:08:04.204 11897.305 - 11947.717: 28.5714% ( 200) 00:08:04.204 11947.717 - 11998.129: 30.7325% ( 213) 00:08:04.204 11998.129 - 12048.542: 33.0763% ( 231) 00:08:04.204 12048.542 - 12098.954: 35.4911% ( 238) 00:08:04.204 12098.954 - 12149.366: 37.6826% ( 216) 00:08:04.204 12149.366 - 12199.778: 39.2350% ( 153) 00:08:04.204 12199.778 - 12250.191: 40.8989% ( 164) 00:08:04.204 12250.191 - 12300.603: 42.4513% ( 153) 00:08:04.204 12300.603 - 12351.015: 44.3283% ( 185) 00:08:04.204 12351.015 - 12401.428: 45.7488% ( 140) 00:08:04.204 12401.428 - 12451.840: 47.2200% ( 145) 00:08:04.204 12451.840 - 12502.252: 48.4375% ( 120) 00:08:04.204 12502.252 - 12552.665: 49.9188% ( 146) 00:08:04.204 12552.665 - 12603.077: 51.2987% ( 136) 00:08:04.204 12603.077 - 12653.489: 52.9830% ( 166) 00:08:04.204 12653.489 - 12703.902: 54.6571% ( 165) 00:08:04.204 12703.902 - 12754.314: 56.0674% ( 139) 00:08:04.204 12754.314 - 12804.726: 57.5791% ( 149) 00:08:04.204 12804.726 - 12855.138: 59.0808% ( 148) 00:08:04.204 12855.138 - 12905.551: 60.6534% ( 155) 00:08:04.204 12905.551 - 13006.375: 63.7683% ( 307) 00:08:04.204 13006.375 - 13107.200: 66.7005% ( 289) 00:08:04.204 13107.200 - 13208.025: 69.2472% ( 251) 00:08:04.204 13208.025 - 13308.849: 71.4387% ( 216) 00:08:04.204 13308.849 - 13409.674: 73.2244% ( 176) 00:08:04.204 13409.674 - 13510.498: 74.7565% ( 151) 00:08:04.204 13510.498 - 13611.323: 75.7204% ( 95) 00:08:04.204 13611.323 - 13712.148: 76.7451% ( 101) 00:08:04.204 13712.148 - 13812.972: 77.8003% ( 104) 00:08:04.204 13812.972 - 13913.797: 78.9164% ( 110) 00:08:04.204 13913.797 - 14014.622: 79.8498% ( 92) 00:08:04.204 14014.622 - 14115.446: 81.0065% ( 114) 00:08:04.204 14115.446 - 14216.271: 82.0515% ( 103) 00:08:04.204 14216.271 - 14317.095: 83.0256% ( 96) 00:08:04.204 14317.095 - 14417.920: 83.9894% ( 95) 00:08:04.204 14417.920 - 14518.745: 84.6084% ( 61) 00:08:04.204 14518.745 - 14619.569: 85.1461% ( 53) 00:08:04.204 14619.569 - 14720.394: 85.6940% ( 54) 00:08:04.204 14720.394 - 14821.218: 86.2825% ( 58) 00:08:04.204 14821.218 - 14922.043: 86.9420% ( 65) 00:08:04.204 14922.043 - 15022.868: 87.6928% ( 74) 00:08:04.204 15022.868 - 15123.692: 88.4030% ( 70) 00:08:04.204 15123.692 - 15224.517: 88.8900% ( 48) 00:08:04.204 15224.517 - 15325.342: 89.2756% ( 38) 00:08:04.204 15325.342 - 15426.166: 89.5597% ( 28) 00:08:04.204 15426.166 - 15526.991: 89.8133% ( 25) 00:08:04.204 15526.991 - 15627.815: 90.1989% ( 38) 00:08:04.204 15627.815 - 15728.640: 90.5337% ( 33) 00:08:04.204 15728.640 - 15829.465: 91.1526% ( 61) 00:08:04.204 15829.465 - 15930.289: 91.6700% ( 51) 00:08:04.204 15930.289 - 16031.114: 92.1875% ( 51) 00:08:04.204 16031.114 - 16131.938: 92.6441% ( 45) 00:08:04.204 16131.938 - 16232.763: 93.0398% ( 39) 00:08:04.204 16232.763 - 16333.588: 93.4558% ( 41) 00:08:04.204 16333.588 - 16434.412: 93.8413% ( 38) 00:08:04.204 16434.412 - 16535.237: 94.1254% ( 
28) 00:08:04.204 16535.237 - 16636.062: 94.4399% ( 31) 00:08:04.204 16636.062 - 16736.886: 94.6733% ( 23) 00:08:04.204 16736.886 - 16837.711: 94.9371% ( 26) 00:08:04.204 16837.711 - 16938.535: 95.2821% ( 34) 00:08:04.204 16938.535 - 17039.360: 95.5357% ( 25) 00:08:04.204 17039.360 - 17140.185: 95.7386% ( 20) 00:08:04.204 17140.185 - 17241.009: 95.8604% ( 12) 00:08:04.204 17241.009 - 17341.834: 95.9416% ( 8) 00:08:04.204 17341.834 - 17442.658: 96.0735% ( 13) 00:08:04.204 17442.658 - 17543.483: 96.1648% ( 9) 00:08:04.204 17543.483 - 17644.308: 96.2561% ( 9) 00:08:04.204 17644.308 - 17745.132: 96.3271% ( 7) 00:08:04.204 17745.132 - 17845.957: 96.5402% ( 21) 00:08:04.204 17845.957 - 17946.782: 96.7837% ( 24) 00:08:04.204 17946.782 - 18047.606: 97.0272% ( 24) 00:08:04.204 18047.606 - 18148.431: 97.2910% ( 26) 00:08:04.204 18148.431 - 18249.255: 97.5345% ( 24) 00:08:04.204 18249.255 - 18350.080: 97.7577% ( 22) 00:08:04.204 18350.080 - 18450.905: 97.9911% ( 23) 00:08:04.204 18450.905 - 18551.729: 98.1940% ( 20) 00:08:04.204 18551.729 - 18652.554: 98.3665% ( 17) 00:08:04.204 18652.554 - 18753.378: 98.5897% ( 22) 00:08:04.204 18753.378 - 18854.203: 98.7723% ( 18) 00:08:04.204 18854.203 - 18955.028: 98.8738% ( 10) 00:08:04.204 18955.028 - 19055.852: 98.9347% ( 6) 00:08:04.204 19055.852 - 19156.677: 98.9651% ( 3) 00:08:04.204 19156.677 - 19257.502: 98.9854% ( 2) 00:08:04.204 19257.502 - 19358.326: 99.0158% ( 3) 00:08:04.204 19358.326 - 19459.151: 99.0361% ( 2) 00:08:04.204 19459.151 - 19559.975: 99.0666% ( 3) 00:08:04.204 19559.975 - 19660.800: 99.0970% ( 3) 00:08:04.204 19660.800 - 19761.625: 99.1376% ( 4) 00:08:04.204 19761.625 - 19862.449: 99.1680% ( 3) 00:08:04.204 19862.449 - 19963.274: 99.1985% ( 3) 00:08:04.204 19963.274 - 20064.098: 99.2390% ( 4) 00:08:04.204 20064.098 - 20164.923: 99.2695% ( 3) 00:08:04.204 20164.923 - 20265.748: 99.2999% ( 3) 00:08:04.204 20265.748 - 20366.572: 99.3405% ( 4) 00:08:04.204 20366.572 - 20467.397: 99.3506% ( 1) 00:08:04.204 24197.908 - 24298.732: 99.3608% ( 1) 00:08:04.204 24298.732 - 24399.557: 99.3912% ( 3) 00:08:04.204 24399.557 - 24500.382: 99.5028% ( 11) 00:08:04.204 24500.382 - 24601.206: 99.6043% ( 10) 00:08:04.204 24802.855 - 24903.680: 99.6347% ( 3) 00:08:04.204 24903.680 - 25004.505: 99.6855% ( 5) 00:08:04.204 25004.505 - 25105.329: 99.7565% ( 7) 00:08:04.204 25105.329 - 25206.154: 99.8377% ( 8) 00:08:04.204 25206.154 - 25306.978: 99.8884% ( 5) 00:08:04.204 25306.978 - 25407.803: 99.9493% ( 6) 00:08:04.204 25407.803 - 25508.628: 100.0000% ( 5) 00:08:04.204 00:08:04.204 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:04.204 ============================================================================== 00:08:04.204 Range in us Cumulative IO count 00:08:04.204 4184.222 - 4209.428: 0.0203% ( 2) 00:08:04.204 4209.428 - 4234.634: 0.0710% ( 5) 00:08:04.204 4234.634 - 4259.840: 0.1116% ( 4) 00:08:04.204 4259.840 - 4285.046: 0.1725% ( 6) 00:08:04.204 4285.046 - 4310.252: 0.2232% ( 5) 00:08:04.204 4310.252 - 4335.458: 0.2334% ( 1) 00:08:04.204 4335.458 - 4360.665: 0.2537% ( 2) 00:08:04.204 4360.665 - 4385.871: 0.2638% ( 1) 00:08:04.205 4385.871 - 4411.077: 0.2739% ( 1) 00:08:04.205 4411.077 - 4436.283: 0.2942% ( 2) 00:08:04.205 4436.283 - 4461.489: 0.3044% ( 1) 00:08:04.205 4461.489 - 4486.695: 0.3145% ( 1) 00:08:04.205 4486.695 - 4511.902: 0.3247% ( 1) 00:08:04.205 4511.902 - 4537.108: 0.3348% ( 1) 00:08:04.205 4537.108 - 4562.314: 0.3450% ( 1) 00:08:04.205 4562.314 - 4587.520: 0.3551% ( 1) 00:08:04.205 4587.520 - 4612.726: 0.3653% ( 1) 
00:08:04.205 4663.138 - 4688.345: 0.3754% ( 1) 00:08:04.205 4688.345 - 4713.551: 0.3856% ( 1) 00:08:04.205 4713.551 - 4738.757: 0.3957% ( 1) 00:08:04.205 4738.757 - 4763.963: 0.4058% ( 1) 00:08:04.205 4763.963 - 4789.169: 0.4160% ( 1) 00:08:04.205 4789.169 - 4814.375: 0.4261% ( 1) 00:08:04.205 4814.375 - 4839.582: 0.4363% ( 1) 00:08:04.205 4839.582 - 4864.788: 0.4464% ( 1) 00:08:04.205 4864.788 - 4889.994: 0.4566% ( 1) 00:08:04.205 4889.994 - 4915.200: 0.4667% ( 1) 00:08:04.205 4915.200 - 4940.406: 0.4769% ( 1) 00:08:04.205 4940.406 - 4965.612: 0.4870% ( 1) 00:08:04.205 4965.612 - 4990.818: 0.4972% ( 1) 00:08:04.205 4990.818 - 5016.025: 0.5073% ( 1) 00:08:04.205 5016.025 - 5041.231: 0.5175% ( 1) 00:08:04.205 5041.231 - 5066.437: 0.5377% ( 2) 00:08:04.205 5066.437 - 5091.643: 0.5479% ( 1) 00:08:04.205 5091.643 - 5116.849: 0.5580% ( 1) 00:08:04.205 5116.849 - 5142.055: 0.5682% ( 1) 00:08:04.205 5142.055 - 5167.262: 0.5783% ( 1) 00:08:04.205 5167.262 - 5192.468: 0.5885% ( 1) 00:08:04.205 5192.468 - 5217.674: 0.5986% ( 1) 00:08:04.205 5217.674 - 5242.880: 0.6088% ( 1) 00:08:04.205 5242.880 - 5268.086: 0.6291% ( 2) 00:08:04.205 5268.086 - 5293.292: 0.6392% ( 1) 00:08:04.205 5293.292 - 5318.498: 0.6494% ( 1) 00:08:04.205 9023.803 - 9074.215: 0.6595% ( 1) 00:08:04.205 9124.628 - 9175.040: 0.6899% ( 3) 00:08:04.205 9175.040 - 9225.452: 0.8117% ( 12) 00:08:04.205 9225.452 - 9275.865: 0.9740% ( 16) 00:08:04.205 9275.865 - 9326.277: 1.0552% ( 8) 00:08:04.205 9326.277 - 9376.689: 1.0958% ( 4) 00:08:04.205 9376.689 - 9427.102: 1.1364% ( 4) 00:08:04.205 9427.102 - 9477.514: 1.1668% ( 3) 00:08:04.205 9477.514 - 9527.926: 1.2074% ( 4) 00:08:04.205 9527.926 - 9578.338: 1.2480% ( 4) 00:08:04.205 9578.338 - 9628.751: 1.2886% ( 4) 00:08:04.205 9628.751 - 9679.163: 1.2987% ( 1) 00:08:04.205 9830.400 - 9880.812: 1.3088% ( 1) 00:08:04.205 9981.637 - 10032.049: 1.3291% ( 2) 00:08:04.205 10032.049 - 10082.462: 1.4407% ( 11) 00:08:04.205 10082.462 - 10132.874: 1.5625% ( 12) 00:08:04.205 10132.874 - 10183.286: 1.7756% ( 21) 00:08:04.205 10183.286 - 10233.698: 2.0191% ( 24) 00:08:04.205 10233.698 - 10284.111: 2.3032% ( 28) 00:08:04.205 10284.111 - 10334.523: 2.7090% ( 40) 00:08:04.205 10334.523 - 10384.935: 2.9627% ( 25) 00:08:04.205 10384.935 - 10435.348: 3.1554% ( 19) 00:08:04.205 10435.348 - 10485.760: 3.3685% ( 21) 00:08:04.205 10485.760 - 10536.172: 3.6019% ( 23) 00:08:04.205 10536.172 - 10586.585: 3.8251% ( 22) 00:08:04.205 10586.585 - 10636.997: 4.1092% ( 28) 00:08:04.205 10636.997 - 10687.409: 4.4541% ( 34) 00:08:04.205 10687.409 - 10737.822: 5.1339% ( 67) 00:08:04.205 10737.822 - 10788.234: 5.5804% ( 44) 00:08:04.205 10788.234 - 10838.646: 6.0166% ( 43) 00:08:04.205 10838.646 - 10889.058: 6.6863% ( 66) 00:08:04.205 10889.058 - 10939.471: 7.2342% ( 54) 00:08:04.205 10939.471 - 10989.883: 7.8328% ( 59) 00:08:04.205 10989.883 - 11040.295: 8.5633% ( 72) 00:08:04.205 11040.295 - 11090.708: 9.2127% ( 64) 00:08:04.205 11090.708 - 11141.120: 10.0041% ( 78) 00:08:04.205 11141.120 - 11191.532: 10.9679% ( 95) 00:08:04.205 11191.532 - 11241.945: 11.9115% ( 93) 00:08:04.205 11241.945 - 11292.357: 12.8754% ( 95) 00:08:04.205 11292.357 - 11342.769: 13.9915% ( 110) 00:08:04.205 11342.769 - 11393.182: 14.9858% ( 98) 00:08:04.205 11393.182 - 11443.594: 16.1729% ( 117) 00:08:04.205 11443.594 - 11494.006: 17.5731% ( 138) 00:08:04.205 11494.006 - 11544.418: 18.8210% ( 123) 00:08:04.205 11544.418 - 11594.831: 20.1502% ( 131) 00:08:04.205 11594.831 - 11645.243: 21.7127% ( 154) 00:08:04.205 11645.243 - 11695.655: 23.0722% ( 134) 
00:08:04.205 11695.655 - 11746.068: 24.3304% ( 124) 00:08:04.205 11746.068 - 11796.480: 25.9233% ( 157) 00:08:04.205 11796.480 - 11846.892: 26.9886% ( 105) 00:08:04.205 11846.892 - 11897.305: 28.1554% ( 115) 00:08:04.205 11897.305 - 11947.717: 29.3831% ( 121) 00:08:04.205 11947.717 - 11998.129: 30.7224% ( 132) 00:08:04.205 11998.129 - 12048.542: 32.1834% ( 144) 00:08:04.205 12048.542 - 12098.954: 33.6749% ( 147) 00:08:04.205 12098.954 - 12149.366: 35.5317% ( 183) 00:08:04.205 12149.366 - 12199.778: 36.9420% ( 139) 00:08:04.205 12199.778 - 12250.191: 38.6059% ( 164) 00:08:04.205 12250.191 - 12300.603: 40.1989% ( 157) 00:08:04.205 12300.603 - 12351.015: 41.9947% ( 177) 00:08:04.205 12351.015 - 12401.428: 43.6080% ( 159) 00:08:04.205 12401.428 - 12451.840: 45.6372% ( 200) 00:08:04.205 12451.840 - 12502.252: 47.1794% ( 152) 00:08:04.205 12502.252 - 12552.665: 48.8738% ( 167) 00:08:04.205 12552.665 - 12603.077: 50.7204% ( 182) 00:08:04.205 12603.077 - 12653.489: 52.4452% ( 170) 00:08:04.205 12653.489 - 12703.902: 54.0483% ( 158) 00:08:04.205 12703.902 - 12754.314: 55.7021% ( 163) 00:08:04.205 12754.314 - 12804.726: 57.0008% ( 128) 00:08:04.205 12804.726 - 12855.138: 58.5329% ( 151) 00:08:04.205 12855.138 - 12905.551: 60.1461% ( 159) 00:08:04.205 12905.551 - 13006.375: 62.4391% ( 226) 00:08:04.205 13006.375 - 13107.200: 65.3713% ( 289) 00:08:04.205 13107.200 - 13208.025: 67.4919% ( 209) 00:08:04.205 13208.025 - 13308.849: 69.6124% ( 209) 00:08:04.205 13308.849 - 13409.674: 72.1895% ( 254) 00:08:04.205 13409.674 - 13510.498: 73.7520% ( 154) 00:08:04.205 13510.498 - 13611.323: 75.0101% ( 124) 00:08:04.205 13611.323 - 13712.148: 76.4712% ( 144) 00:08:04.205 13712.148 - 13812.972: 77.5365% ( 105) 00:08:04.205 13812.972 - 13913.797: 78.6120% ( 106) 00:08:04.205 13913.797 - 14014.622: 79.7382% ( 111) 00:08:04.205 14014.622 - 14115.446: 80.9355% ( 118) 00:08:04.205 14115.446 - 14216.271: 82.1327% ( 118) 00:08:04.205 14216.271 - 14317.095: 83.2792% ( 113) 00:08:04.205 14317.095 - 14417.920: 84.3649% ( 107) 00:08:04.205 14417.920 - 14518.745: 85.2476% ( 87) 00:08:04.205 14518.745 - 14619.569: 85.9984% ( 74) 00:08:04.205 14619.569 - 14720.394: 86.5564% ( 55) 00:08:04.205 14720.394 - 14821.218: 87.0942% ( 53) 00:08:04.205 14821.218 - 14922.043: 87.6015% ( 50) 00:08:04.205 14922.043 - 15022.868: 88.1392% ( 53) 00:08:04.205 15022.868 - 15123.692: 88.6364% ( 49) 00:08:04.205 15123.692 - 15224.517: 88.9813% ( 34) 00:08:04.205 15224.517 - 15325.342: 89.3669% ( 38) 00:08:04.205 15325.342 - 15426.166: 89.7930% ( 42) 00:08:04.205 15426.166 - 15526.991: 90.0061% ( 21) 00:08:04.205 15526.991 - 15627.815: 90.2597% ( 25) 00:08:04.205 15627.815 - 15728.640: 90.5337% ( 27) 00:08:04.205 15728.640 - 15829.465: 90.7468% ( 21) 00:08:04.205 15829.465 - 15930.289: 91.0816% ( 33) 00:08:04.205 15930.289 - 16031.114: 91.4468% ( 36) 00:08:04.205 16031.114 - 16131.938: 92.0150% ( 56) 00:08:04.205 16131.938 - 16232.763: 92.4310% ( 41) 00:08:04.205 16232.763 - 16333.588: 92.7760% ( 34) 00:08:04.205 16333.588 - 16434.412: 93.1311% ( 35) 00:08:04.205 16434.412 - 16535.237: 93.6790% ( 54) 00:08:04.205 16535.237 - 16636.062: 94.2877% ( 60) 00:08:04.205 16636.062 - 16736.886: 94.7342% ( 44) 00:08:04.205 16736.886 - 16837.711: 95.0994% ( 36) 00:08:04.205 16837.711 - 16938.535: 95.4140% ( 31) 00:08:04.205 16938.535 - 17039.360: 95.6778% ( 26) 00:08:04.205 17039.360 - 17140.185: 95.8300% ( 15) 00:08:04.205 17140.185 - 17241.009: 96.0227% ( 19) 00:08:04.205 17241.009 - 17341.834: 96.2358% ( 21) 00:08:04.205 17341.834 - 17442.658: 96.3271% ( 
9) 00:08:04.205 17442.658 - 17543.483: 96.4387% ( 11) 00:08:04.205 17543.483 - 17644.308: 96.5808% ( 14) 00:08:04.205 17644.308 - 17745.132: 96.6112% ( 3) 00:08:04.205 17745.132 - 17845.957: 96.6619% ( 5) 00:08:04.205 17845.957 - 17946.782: 96.7330% ( 7) 00:08:04.205 17946.782 - 18047.606: 96.8446% ( 11) 00:08:04.205 18047.606 - 18148.431: 96.9765% ( 13) 00:08:04.205 18148.431 - 18249.255: 97.1084% ( 13) 00:08:04.205 18249.255 - 18350.080: 97.4127% ( 30) 00:08:04.205 18350.080 - 18450.905: 97.6664% ( 25) 00:08:04.205 18450.905 - 18551.729: 97.8287% ( 16) 00:08:04.205 18551.729 - 18652.554: 98.0925% ( 26) 00:08:04.205 18652.554 - 18753.378: 98.2143% ( 12) 00:08:04.205 18753.378 - 18854.203: 98.3766% ( 16) 00:08:04.205 18854.203 - 18955.028: 98.5288% ( 15) 00:08:04.205 18955.028 - 19055.852: 98.6709% ( 14) 00:08:04.205 19055.852 - 19156.677: 98.7926% ( 12) 00:08:04.205 19156.677 - 19257.502: 98.8839% ( 9) 00:08:04.205 19257.502 - 19358.326: 98.9854% ( 10) 00:08:04.205 19358.326 - 19459.151: 99.0767% ( 9) 00:08:04.205 19459.151 - 19559.975: 99.1173% ( 4) 00:08:04.205 19559.975 - 19660.800: 99.1477% ( 3) 00:08:04.205 19660.800 - 19761.625: 99.1883% ( 4) 00:08:04.205 19761.625 - 19862.449: 99.2188% ( 3) 00:08:04.205 19862.449 - 19963.274: 99.2593% ( 4) 00:08:04.205 19963.274 - 20064.098: 99.2898% ( 3) 00:08:04.205 20064.098 - 20164.923: 99.3202% ( 3) 00:08:04.205 20164.923 - 20265.748: 99.3506% ( 3) 00:08:04.205 24500.382 - 24601.206: 99.3912% ( 4) 00:08:04.205 24601.206 - 24702.031: 99.4724% ( 8) 00:08:04.205 24702.031 - 24802.855: 99.6347% ( 16) 00:08:04.205 24802.855 - 24903.680: 99.7058% ( 7) 00:08:04.205 24903.680 - 25004.505: 99.7666% ( 6) 00:08:04.205 25004.505 - 25105.329: 99.7768% ( 1) 00:08:04.205 25105.329 - 25206.154: 99.8174% ( 4) 00:08:04.206 25206.154 - 25306.978: 99.8782% ( 6) 00:08:04.206 25306.978 - 25407.803: 99.9290% ( 5) 00:08:04.206 25407.803 - 25508.628: 99.9899% ( 6) 00:08:04.206 25508.628 - 25609.452: 100.0000% ( 1) 00:08:04.206 00:08:04.206 04:13:49 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:04.206 00:08:04.206 real 0m2.509s 00:08:04.206 user 0m2.168s 00:08:04.206 sys 0m0.222s 00:08:04.206 04:13:49 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.206 04:13:49 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:04.206 ************************************ 00:08:04.206 END TEST nvme_perf 00:08:04.206 ************************************ 00:08:04.206 04:13:49 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:04.206 04:13:49 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:04.206 04:13:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.206 04:13:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.206 ************************************ 00:08:04.206 START TEST nvme_hello_world 00:08:04.206 ************************************ 00:08:04.206 04:13:49 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:04.206 Initializing NVMe Controllers 00:08:04.206 Attached to 0000:00:13.0 00:08:04.206 Namespace ID: 1 size: 1GB 00:08:04.206 Attached to 0000:00:10.0 00:08:04.206 Namespace ID: 1 size: 6GB 00:08:04.206 Attached to 0000:00:11.0 00:08:04.206 Namespace ID: 1 size: 5GB 00:08:04.206 Attached to 0000:00:12.0 00:08:04.206 Namespace ID: 1 size: 4GB 00:08:04.206 Namespace ID: 2 size: 4GB 00:08:04.206 Namespace ID: 3 size: 4GB 
00:08:04.206 Initialization complete. 00:08:04.206 INFO: using host memory buffer for IO 00:08:04.206 Hello world! 00:08:04.206 INFO: using host memory buffer for IO 00:08:04.206 Hello world! 00:08:04.206 INFO: using host memory buffer for IO 00:08:04.206 Hello world! 00:08:04.206 INFO: using host memory buffer for IO 00:08:04.206 Hello world! 00:08:04.206 INFO: using host memory buffer for IO 00:08:04.206 Hello world! 00:08:04.206 INFO: using host memory buffer for IO 00:08:04.206 Hello world! 00:08:04.206 00:08:04.206 real 0m0.218s 00:08:04.206 user 0m0.080s 00:08:04.206 sys 0m0.087s 00:08:04.206 04:13:49 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.206 04:13:49 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:04.206 ************************************ 00:08:04.206 END TEST nvme_hello_world 00:08:04.206 ************************************ 00:08:04.467 04:13:49 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:04.467 04:13:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.467 04:13:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.467 04:13:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.467 ************************************ 00:08:04.467 START TEST nvme_sgl 00:08:04.467 ************************************ 00:08:04.467 04:13:49 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:04.467 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:04.467 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:04.467 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:04.467 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:04.467 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:04.467 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:04.467 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:04.467 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:04.467 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:04.467 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:04.728 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:04.728 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:04.728 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:04.728 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:04.728 0000:00:12.0: 
build_io_request_3 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:04.728 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:04.728 NVMe Readv/Writev Request test 00:08:04.728 Attached to 0000:00:13.0 00:08:04.728 Attached to 0000:00:10.0 00:08:04.728 Attached to 0000:00:11.0 00:08:04.728 Attached to 0000:00:12.0 00:08:04.728 0000:00:10.0: build_io_request_2 test passed 00:08:04.728 0000:00:10.0: build_io_request_4 test passed 00:08:04.728 0000:00:10.0: build_io_request_5 test passed 00:08:04.728 0000:00:10.0: build_io_request_6 test passed 00:08:04.728 0000:00:10.0: build_io_request_7 test passed 00:08:04.728 0000:00:10.0: build_io_request_10 test passed 00:08:04.728 0000:00:11.0: build_io_request_2 test passed 00:08:04.728 0000:00:11.0: build_io_request_4 test passed 00:08:04.728 0000:00:11.0: build_io_request_5 test passed 00:08:04.728 0000:00:11.0: build_io_request_6 test passed 00:08:04.728 0000:00:11.0: build_io_request_7 test passed 00:08:04.728 0000:00:11.0: build_io_request_10 test passed 00:08:04.728 Cleaning up... 00:08:04.728 00:08:04.728 real 0m0.292s 00:08:04.728 user 0m0.137s 00:08:04.728 sys 0m0.099s 00:08:04.728 04:13:50 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.728 04:13:50 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:04.728 ************************************ 00:08:04.728 END TEST nvme_sgl 00:08:04.728 ************************************ 00:08:04.728 04:13:50 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:04.728 04:13:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.728 04:13:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.728 04:13:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.728 ************************************ 00:08:04.728 START TEST nvme_e2edp 00:08:04.728 ************************************ 00:08:04.728 04:13:50 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:04.988 NVMe Write/Read with End-to-End data protection test 00:08:04.988 Attached to 0000:00:13.0 00:08:04.988 Attached to 0000:00:10.0 00:08:04.988 Attached to 0000:00:11.0 00:08:04.988 Attached to 0000:00:12.0 00:08:04.988 Cleaning up... 
00:08:04.988 00:08:04.988 real 0m0.211s 00:08:04.988 user 0m0.075s 00:08:04.989 sys 0m0.093s 00:08:04.989 04:13:50 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.989 ************************************ 00:08:04.989 END TEST nvme_e2edp 00:08:04.989 04:13:50 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:04.989 ************************************ 00:08:04.989 04:13:50 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:04.989 04:13:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.989 04:13:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.989 04:13:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.989 ************************************ 00:08:04.989 START TEST nvme_reserve 00:08:04.989 ************************************ 00:08:04.989 04:13:50 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:05.249 ===================================================== 00:08:05.249 NVMe Controller at PCI bus 0, device 19, function 0 00:08:05.249 ===================================================== 00:08:05.249 Reservations: Not Supported 00:08:05.249 ===================================================== 00:08:05.249 NVMe Controller at PCI bus 0, device 16, function 0 00:08:05.249 ===================================================== 00:08:05.249 Reservations: Not Supported 00:08:05.249 ===================================================== 00:08:05.249 NVMe Controller at PCI bus 0, device 17, function 0 00:08:05.249 ===================================================== 00:08:05.249 Reservations: Not Supported 00:08:05.249 ===================================================== 00:08:05.249 NVMe Controller at PCI bus 0, device 18, function 0 00:08:05.249 ===================================================== 00:08:05.249 Reservations: Not Supported 00:08:05.249 Reservation test passed 00:08:05.249 00:08:05.249 real 0m0.210s 00:08:05.249 user 0m0.062s 00:08:05.249 sys 0m0.100s 00:08:05.249 ************************************ 00:08:05.249 END TEST nvme_reserve 00:08:05.249 ************************************ 00:08:05.249 04:13:50 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.249 04:13:50 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:05.249 04:13:50 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:05.249 04:13:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.249 04:13:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.249 04:13:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.249 ************************************ 00:08:05.249 START TEST nvme_err_injection 00:08:05.249 ************************************ 00:08:05.249 04:13:50 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:05.510 NVMe Error Injection test 00:08:05.510 Attached to 0000:00:13.0 00:08:05.510 Attached to 0000:00:10.0 00:08:05.510 Attached to 0000:00:11.0 00:08:05.510 Attached to 0000:00:12.0 00:08:05.510 0000:00:13.0: get features failed as expected 00:08:05.510 0000:00:10.0: get features failed as expected 00:08:05.510 0000:00:11.0: get features failed as expected 00:08:05.510 0000:00:12.0: get features failed as expected 00:08:05.510 
0000:00:13.0: get features successfully as expected 00:08:05.510 0000:00:10.0: get features successfully as expected 00:08:05.510 0000:00:11.0: get features successfully as expected 00:08:05.510 0000:00:12.0: get features successfully as expected 00:08:05.510 0000:00:12.0: read failed as expected 00:08:05.510 0000:00:13.0: read failed as expected 00:08:05.510 0000:00:10.0: read failed as expected 00:08:05.510 0000:00:11.0: read failed as expected 00:08:05.510 0000:00:11.0: read successfully as expected 00:08:05.510 0000:00:12.0: read successfully as expected 00:08:05.510 0000:00:13.0: read successfully as expected 00:08:05.510 0000:00:10.0: read successfully as expected 00:08:05.510 Cleaning up... 00:08:05.510 00:08:05.510 real 0m0.224s 00:08:05.510 user 0m0.068s 00:08:05.510 sys 0m0.107s 00:08:05.510 04:13:51 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.510 ************************************ 00:08:05.510 END TEST nvme_err_injection 00:08:05.510 ************************************ 00:08:05.510 04:13:51 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:05.510 04:13:51 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:05.510 04:13:51 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:05.510 04:13:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.510 04:13:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.510 ************************************ 00:08:05.510 START TEST nvme_overhead 00:08:05.510 ************************************ 00:08:05.510 04:13:51 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:06.888 Initializing NVMe Controllers 00:08:06.888 Attached to 0000:00:13.0 00:08:06.888 Attached to 0000:00:10.0 00:08:06.888 Attached to 0000:00:11.0 00:08:06.888 Attached to 0000:00:12.0 00:08:06.888 Initialization complete. Launching workers. 
00:08:06.888 submit (in ns) avg, min, max = 13595.9, 10805.4, 60417.7 00:08:06.888 complete (in ns) avg, min, max = 9363.4, 7817.7, 1748445.4 00:08:06.888 00:08:06.888 Submit histogram 00:08:06.888 ================ 00:08:06.888 Range in us Cumulative Count 00:08:06.888 10.782 - 10.831: 0.0267% ( 1) 00:08:06.888 10.831 - 10.880: 0.0802% ( 2) 00:08:06.888 10.929 - 10.978: 0.1605% ( 3) 00:08:06.888 10.978 - 11.028: 0.2675% ( 4) 00:08:06.888 11.028 - 11.077: 0.5082% ( 9) 00:08:06.888 11.077 - 11.126: 0.7756% ( 10) 00:08:06.888 11.126 - 11.175: 1.2303% ( 17) 00:08:06.889 11.175 - 11.225: 1.4442% ( 8) 00:08:06.889 11.225 - 11.274: 2.2466% ( 30) 00:08:06.889 11.274 - 11.323: 3.8246% ( 59) 00:08:06.889 11.323 - 11.372: 6.8200% ( 112) 00:08:06.889 11.372 - 11.422: 10.8318% ( 150) 00:08:06.889 11.422 - 11.471: 15.7529% ( 184) 00:08:06.889 11.471 - 11.520: 21.3426% ( 209) 00:08:06.889 11.520 - 11.569: 25.9963% ( 174) 00:08:06.889 11.569 - 11.618: 29.0452% ( 114) 00:08:06.889 11.618 - 11.668: 31.6127% ( 96) 00:08:06.889 11.668 - 11.717: 33.8593% ( 84) 00:08:06.889 11.717 - 11.766: 35.5710% ( 64) 00:08:06.889 11.766 - 11.815: 37.3897% ( 68) 00:08:06.889 11.815 - 11.865: 39.0746% ( 63) 00:08:06.889 11.865 - 11.914: 40.3584% ( 48) 00:08:06.889 11.914 - 11.963: 41.8561% ( 56) 00:08:06.889 11.963 - 12.012: 42.9794% ( 42) 00:08:06.889 12.012 - 12.062: 44.1027% ( 42) 00:08:06.889 12.062 - 12.111: 45.0388% ( 35) 00:08:06.889 12.111 - 12.160: 45.8679% ( 31) 00:08:06.889 12.160 - 12.209: 46.2958% ( 16) 00:08:06.889 12.209 - 12.258: 46.7505% ( 17) 00:08:06.889 12.258 - 12.308: 46.9644% ( 8) 00:08:06.889 12.308 - 12.357: 47.2319% ( 10) 00:08:06.889 12.357 - 12.406: 47.4993% ( 10) 00:08:06.889 12.406 - 12.455: 47.7935% ( 11) 00:08:06.889 12.455 - 12.505: 47.9540% ( 6) 00:08:06.889 12.505 - 12.554: 48.0610% ( 4) 00:08:06.889 12.554 - 12.603: 48.1145% ( 2) 00:08:06.889 12.603 - 12.702: 48.4087% ( 11) 00:08:06.889 12.702 - 12.800: 48.4622% ( 2) 00:08:06.889 12.800 - 12.898: 48.5691% ( 4) 00:08:06.889 12.898 - 12.997: 48.6761% ( 4) 00:08:06.889 12.997 - 13.095: 48.8366% ( 6) 00:08:06.889 13.095 - 13.194: 48.9703% ( 5) 00:08:06.889 13.194 - 13.292: 49.1843% ( 8) 00:08:06.889 13.292 - 13.391: 49.6389% ( 17) 00:08:06.889 13.391 - 13.489: 50.1471% ( 19) 00:08:06.889 13.489 - 13.588: 50.9227% ( 29) 00:08:06.889 13.588 - 13.686: 51.9390% ( 38) 00:08:06.889 13.686 - 13.785: 53.9182% ( 74) 00:08:06.889 13.785 - 13.883: 56.5659% ( 99) 00:08:06.889 13.883 - 13.982: 59.6951% ( 117) 00:08:06.889 13.982 - 14.080: 63.0382% ( 125) 00:08:06.889 14.080 - 14.178: 66.5151% ( 130) 00:08:06.889 14.178 - 14.277: 69.4303% ( 109) 00:08:06.889 14.277 - 14.375: 72.1851% ( 103) 00:08:06.889 14.375 - 14.474: 73.9503% ( 66) 00:08:06.889 14.474 - 14.572: 75.3677% ( 53) 00:08:06.889 14.572 - 14.671: 76.8922% ( 57) 00:08:06.889 14.671 - 14.769: 78.6574% ( 66) 00:08:06.889 14.769 - 14.868: 80.2621% ( 60) 00:08:06.889 14.868 - 14.966: 82.1075% ( 69) 00:08:06.889 14.966 - 15.065: 83.5518% ( 54) 00:08:06.889 15.065 - 15.163: 84.5948% ( 39) 00:08:06.889 15.163 - 15.262: 85.4239% ( 31) 00:08:06.889 15.262 - 15.360: 86.4937% ( 40) 00:08:06.889 15.360 - 15.458: 86.8681% ( 14) 00:08:06.889 15.458 - 15.557: 87.3496% ( 18) 00:08:06.889 15.557 - 15.655: 87.5368% ( 7) 00:08:06.889 15.655 - 15.754: 87.7240% ( 7) 00:08:06.889 15.754 - 15.852: 88.0182% ( 11) 00:08:06.889 15.852 - 15.951: 88.0984% ( 3) 00:08:06.889 15.951 - 16.049: 88.2321% ( 5) 00:08:06.889 16.049 - 16.148: 88.3659% ( 5) 00:08:06.889 16.148 - 16.246: 88.5798% ( 8) 00:08:06.889 16.246 - 16.345: 
88.7136% ( 5) 00:08:06.889 16.345 - 16.443: 88.8205% ( 4) 00:08:06.889 16.443 - 16.542: 89.0345% ( 8) 00:08:06.889 16.542 - 16.640: 89.1147% ( 3) 00:08:06.889 16.640 - 16.738: 89.2217% ( 4) 00:08:06.889 16.738 - 16.837: 89.3822% ( 6) 00:08:06.889 16.837 - 16.935: 89.5159% ( 5) 00:08:06.889 16.935 - 17.034: 89.8903% ( 14) 00:08:06.889 17.034 - 17.132: 90.3183% ( 16) 00:08:06.889 17.132 - 17.231: 90.7462% ( 16) 00:08:06.889 17.231 - 17.329: 91.0136% ( 10) 00:08:06.889 17.329 - 17.428: 91.3346% ( 12) 00:08:06.889 17.428 - 17.526: 91.7358% ( 15) 00:08:06.889 17.526 - 17.625: 92.1904% ( 17) 00:08:06.889 17.625 - 17.723: 92.7253% ( 20) 00:08:06.889 17.723 - 17.822: 92.9660% ( 9) 00:08:06.889 17.822 - 17.920: 93.2067% ( 9) 00:08:06.889 17.920 - 18.018: 93.6079% ( 15) 00:08:06.889 18.018 - 18.117: 94.0091% ( 15) 00:08:06.889 18.117 - 18.215: 94.2765% ( 10) 00:08:06.889 18.215 - 18.314: 94.5975% ( 12) 00:08:06.889 18.314 - 18.412: 94.7580% ( 6) 00:08:06.889 18.412 - 18.511: 94.8649% ( 4) 00:08:06.889 18.511 - 18.609: 94.9719% ( 4) 00:08:06.889 18.609 - 18.708: 95.1324% ( 6) 00:08:06.889 18.708 - 18.806: 95.2929% ( 6) 00:08:06.889 18.806 - 18.905: 95.5336% ( 9) 00:08:06.889 18.905 - 19.003: 95.7208% ( 7) 00:08:06.889 19.003 - 19.102: 95.9080% ( 7) 00:08:06.889 19.102 - 19.200: 96.1220% ( 8) 00:08:06.889 19.200 - 19.298: 96.3627% ( 9) 00:08:06.889 19.298 - 19.397: 96.5231% ( 6) 00:08:06.889 19.397 - 19.495: 96.6301% ( 4) 00:08:06.889 19.495 - 19.594: 96.7104% ( 3) 00:08:06.889 19.594 - 19.692: 96.8173% ( 4) 00:08:06.889 19.692 - 19.791: 97.0045% ( 7) 00:08:06.889 19.791 - 19.889: 97.0848% ( 3) 00:08:06.889 19.889 - 19.988: 97.2453% ( 6) 00:08:06.889 19.988 - 20.086: 97.4860% ( 9) 00:08:06.889 20.086 - 20.185: 97.5929% ( 4) 00:08:06.889 20.185 - 20.283: 97.6732% ( 3) 00:08:06.889 20.283 - 20.382: 97.8336% ( 6) 00:08:06.889 20.382 - 20.480: 97.9139% ( 3) 00:08:06.889 20.480 - 20.578: 97.9674% ( 2) 00:08:06.889 20.578 - 20.677: 98.0209% ( 2) 00:08:06.889 20.677 - 20.775: 98.1278% ( 4) 00:08:06.889 20.775 - 20.874: 98.1813% ( 2) 00:08:06.889 20.874 - 20.972: 98.2616% ( 3) 00:08:06.889 20.972 - 21.071: 98.3151% ( 2) 00:08:06.889 21.071 - 21.169: 98.3685% ( 2) 00:08:06.889 21.169 - 21.268: 98.4488% ( 3) 00:08:06.889 21.268 - 21.366: 98.5023% ( 2) 00:08:06.889 21.366 - 21.465: 98.5825% ( 3) 00:08:06.889 21.465 - 21.563: 98.6627% ( 3) 00:08:06.889 21.563 - 21.662: 98.7162% ( 2) 00:08:06.889 21.662 - 21.760: 98.8232% ( 4) 00:08:06.889 21.760 - 21.858: 98.9035% ( 3) 00:08:06.889 21.858 - 21.957: 98.9302% ( 1) 00:08:06.889 21.957 - 22.055: 99.0372% ( 4) 00:08:06.889 22.055 - 22.154: 99.0639% ( 1) 00:08:06.889 22.252 - 22.351: 99.0907% ( 1) 00:08:06.889 22.351 - 22.449: 99.1174% ( 1) 00:08:06.889 22.449 - 22.548: 99.1976% ( 3) 00:08:06.889 22.548 - 22.646: 99.2244% ( 1) 00:08:06.889 22.745 - 22.843: 99.2511% ( 1) 00:08:06.889 22.942 - 23.040: 99.2779% ( 1) 00:08:06.889 23.040 - 23.138: 99.3046% ( 1) 00:08:06.889 23.138 - 23.237: 99.3314% ( 1) 00:08:06.889 23.335 - 23.434: 99.3581% ( 1) 00:08:06.889 23.434 - 23.532: 99.3849% ( 1) 00:08:06.889 23.631 - 23.729: 99.4116% ( 1) 00:08:06.889 23.729 - 23.828: 99.4651% ( 2) 00:08:06.889 23.926 - 24.025: 99.4918% ( 1) 00:08:06.889 24.123 - 24.222: 99.5186% ( 1) 00:08:06.889 24.222 - 24.320: 99.5453% ( 1) 00:08:06.889 24.320 - 24.418: 99.5721% ( 1) 00:08:06.889 24.615 - 24.714: 99.5988% ( 1) 00:08:06.889 24.911 - 25.009: 99.6256% ( 1) 00:08:06.889 25.009 - 25.108: 99.6791% ( 2) 00:08:06.889 25.108 - 25.206: 99.7058% ( 1) 00:08:06.889 25.206 - 25.403: 99.7325% ( 1) 
00:08:06.889 25.797 - 25.994: 99.7593% ( 1) 00:08:06.889 27.175 - 27.372: 99.7860% ( 1) 00:08:06.889 27.372 - 27.569: 99.8128% ( 1) 00:08:06.889 29.342 - 29.538: 99.8395% ( 1) 00:08:06.889 30.917 - 31.114: 99.8663% ( 1) 00:08:06.889 31.311 - 31.508: 99.8930% ( 1) 00:08:06.889 55.138 - 55.532: 99.9198% ( 1) 00:08:06.889 56.714 - 57.108: 99.9465% ( 1) 00:08:06.889 57.108 - 57.502: 99.9733% ( 1) 00:08:06.889 60.258 - 60.652: 100.0000% ( 1) 00:08:06.889 00:08:06.889 Complete histogram 00:08:06.889 ================== 00:08:06.889 Range in us Cumulative Count 00:08:06.889 7.778 - 7.828: 0.0267% ( 1) 00:08:06.889 7.828 - 7.877: 0.0802% ( 2) 00:08:06.889 7.877 - 7.926: 1.3640% ( 48) 00:08:06.889 7.926 - 7.975: 5.9374% ( 171) 00:08:06.889 7.975 - 8.025: 11.7679% ( 218) 00:08:06.889 8.025 - 8.074: 18.2937% ( 244) 00:08:06.889 8.074 - 8.123: 24.8195% ( 244) 00:08:06.889 8.123 - 8.172: 31.1313% ( 236) 00:08:06.889 8.172 - 8.222: 37.4967% ( 238) 00:08:06.889 8.222 - 8.271: 42.9794% ( 205) 00:08:06.889 8.271 - 8.320: 48.3552% ( 201) 00:08:06.889 8.320 - 8.369: 53.1693% ( 180) 00:08:06.889 8.369 - 8.418: 58.3846% ( 195) 00:08:06.889 8.418 - 8.468: 62.5301% ( 155) 00:08:06.889 8.468 - 8.517: 65.6593% ( 117) 00:08:06.889 8.517 - 8.566: 68.2535% ( 97) 00:08:06.889 8.566 - 8.615: 70.4199% ( 81) 00:08:06.889 8.615 - 8.665: 71.8909% ( 55) 00:08:06.889 8.665 - 8.714: 72.9874% ( 41) 00:08:06.890 8.714 - 8.763: 73.9770% ( 37) 00:08:06.890 8.763 - 8.812: 74.6189% ( 24) 00:08:06.890 8.812 - 8.862: 75.1538% ( 20) 00:08:06.890 8.862 - 8.911: 75.6887% ( 20) 00:08:06.890 8.911 - 8.960: 76.0364% ( 13) 00:08:06.890 8.960 - 9.009: 76.5178% ( 18) 00:08:06.890 9.009 - 9.058: 77.0794% ( 21) 00:08:06.890 9.058 - 9.108: 77.5341% ( 17) 00:08:06.890 9.108 - 9.157: 78.2830% ( 28) 00:08:06.890 9.157 - 9.206: 78.9248% ( 24) 00:08:06.890 9.206 - 9.255: 79.3260% ( 15) 00:08:06.890 9.255 - 9.305: 80.0481% ( 27) 00:08:06.890 9.305 - 9.354: 80.7703% ( 27) 00:08:06.890 9.354 - 9.403: 81.5191% ( 28) 00:08:06.890 9.403 - 9.452: 82.1878% ( 25) 00:08:06.890 9.452 - 9.502: 82.8564% ( 25) 00:08:06.890 9.502 - 9.551: 83.0971% ( 9) 00:08:06.890 9.551 - 9.600: 83.6320% ( 20) 00:08:06.890 9.600 - 9.649: 84.1134% ( 18) 00:08:06.890 9.649 - 9.698: 84.4611% ( 13) 00:08:06.890 9.698 - 9.748: 84.7553% ( 11) 00:08:06.890 9.748 - 9.797: 85.3972% ( 24) 00:08:06.890 9.797 - 9.846: 86.0658% ( 25) 00:08:06.890 9.846 - 9.895: 86.8681% ( 30) 00:08:06.890 9.895 - 9.945: 87.6972% ( 31) 00:08:06.890 9.945 - 9.994: 88.6066% ( 34) 00:08:06.890 9.994 - 10.043: 89.5427% ( 35) 00:08:06.890 10.043 - 10.092: 90.2113% ( 25) 00:08:06.890 10.092 - 10.142: 91.0671% ( 32) 00:08:06.890 10.142 - 10.191: 91.7625% ( 26) 00:08:06.890 10.191 - 10.240: 92.5916% ( 31) 00:08:06.890 10.240 - 10.289: 93.1800% ( 22) 00:08:06.890 10.289 - 10.338: 93.9556% ( 29) 00:08:06.890 10.338 - 10.388: 94.4370% ( 18) 00:08:06.890 10.388 - 10.437: 94.7580% ( 12) 00:08:06.890 10.437 - 10.486: 95.1324% ( 14) 00:08:06.890 10.486 - 10.535: 95.5068% ( 14) 00:08:06.890 10.535 - 10.585: 95.7475% ( 9) 00:08:06.890 10.585 - 10.634: 95.9347% ( 7) 00:08:06.890 10.634 - 10.683: 96.0685% ( 5) 00:08:06.890 10.683 - 10.732: 96.1487% ( 3) 00:08:06.890 10.732 - 10.782: 96.2022% ( 2) 00:08:06.890 10.782 - 10.831: 96.2557% ( 2) 00:08:06.890 10.831 - 10.880: 96.3092% ( 2) 00:08:06.890 10.880 - 10.929: 96.4162% ( 4) 00:08:06.890 10.929 - 10.978: 96.4429% ( 1) 00:08:06.890 10.978 - 11.028: 96.4964% ( 2) 00:08:06.890 11.028 - 11.077: 96.5231% ( 1) 00:08:06.890 11.077 - 11.126: 96.6034% ( 3) 00:08:06.890 11.126 - 11.175: 
96.6569% ( 2) 00:08:06.890 11.175 - 11.225: 96.6836% ( 1) 00:08:06.890 11.225 - 11.274: 96.7104% ( 1) 00:08:06.890 11.274 - 11.323: 96.7638% ( 2) 00:08:06.890 11.323 - 11.372: 96.8173% ( 2) 00:08:06.890 11.372 - 11.422: 96.8976% ( 3) 00:08:06.890 11.422 - 11.471: 96.9511% ( 2) 00:08:06.890 11.471 - 11.520: 97.0045% ( 2) 00:08:06.890 11.520 - 11.569: 97.0313% ( 1) 00:08:06.890 11.618 - 11.668: 97.1383% ( 4) 00:08:06.890 11.668 - 11.717: 97.1650% ( 1) 00:08:06.890 11.717 - 11.766: 97.1918% ( 1) 00:08:06.890 11.766 - 11.815: 97.2453% ( 2) 00:08:06.890 11.815 - 11.865: 97.2720% ( 1) 00:08:06.890 11.914 - 11.963: 97.2987% ( 1) 00:08:06.890 12.012 - 12.062: 97.3255% ( 1) 00:08:06.890 12.111 - 12.160: 97.3522% ( 1) 00:08:06.890 12.209 - 12.258: 97.3790% ( 1) 00:08:06.890 12.258 - 12.308: 97.4057% ( 1) 00:08:06.890 12.357 - 12.406: 97.4325% ( 1) 00:08:06.890 12.406 - 12.455: 97.4860% ( 2) 00:08:06.890 12.702 - 12.800: 97.5127% ( 1) 00:08:06.890 12.800 - 12.898: 97.5662% ( 2) 00:08:06.890 13.489 - 13.588: 97.6197% ( 2) 00:08:06.890 13.588 - 13.686: 97.6732% ( 2) 00:08:06.890 13.686 - 13.785: 97.7267% ( 2) 00:08:06.890 13.785 - 13.883: 97.8336% ( 4) 00:08:06.890 13.883 - 13.982: 97.9139% ( 3) 00:08:06.890 13.982 - 14.080: 97.9674% ( 2) 00:08:06.890 14.080 - 14.178: 98.0744% ( 4) 00:08:06.890 14.178 - 14.277: 98.1546% ( 3) 00:08:06.890 14.277 - 14.375: 98.2081% ( 2) 00:08:06.890 14.375 - 14.474: 98.2616% ( 2) 00:08:06.890 14.474 - 14.572: 98.2883% ( 1) 00:08:06.890 14.671 - 14.769: 98.3685% ( 3) 00:08:06.890 14.769 - 14.868: 98.4488% ( 3) 00:08:06.890 14.868 - 14.966: 98.5023% ( 2) 00:08:06.890 14.966 - 15.065: 98.6093% ( 4) 00:08:06.890 15.065 - 15.163: 98.6360% ( 1) 00:08:06.890 15.163 - 15.262: 98.6627% ( 1) 00:08:06.890 15.360 - 15.458: 98.7430% ( 3) 00:08:06.890 15.458 - 15.557: 98.7697% ( 1) 00:08:06.890 15.557 - 15.655: 98.8232% ( 2) 00:08:06.890 15.655 - 15.754: 98.8767% ( 2) 00:08:06.890 15.754 - 15.852: 98.9569% ( 3) 00:08:06.890 15.852 - 15.951: 99.0372% ( 3) 00:08:06.890 16.049 - 16.148: 99.0639% ( 1) 00:08:06.890 16.148 - 16.246: 99.1174% ( 2) 00:08:06.890 16.246 - 16.345: 99.1709% ( 2) 00:08:06.890 16.443 - 16.542: 99.1976% ( 1) 00:08:06.890 16.542 - 16.640: 99.2244% ( 1) 00:08:06.890 16.837 - 16.935: 99.2511% ( 1) 00:08:06.890 17.132 - 17.231: 99.2779% ( 1) 00:08:06.890 17.231 - 17.329: 99.3046% ( 1) 00:08:06.890 17.526 - 17.625: 99.3849% ( 3) 00:08:06.890 17.920 - 18.018: 99.4116% ( 1) 00:08:06.890 18.905 - 19.003: 99.4384% ( 1) 00:08:06.890 19.003 - 19.102: 99.4651% ( 1) 00:08:06.890 19.200 - 19.298: 99.4918% ( 1) 00:08:06.890 20.972 - 21.071: 99.5186% ( 1) 00:08:06.890 21.760 - 21.858: 99.5453% ( 1) 00:08:06.890 24.320 - 24.418: 99.5721% ( 1) 00:08:06.890 24.812 - 24.911: 99.5988% ( 1) 00:08:06.890 25.009 - 25.108: 99.6256% ( 1) 00:08:06.890 25.108 - 25.206: 99.6523% ( 1) 00:08:06.890 25.403 - 25.600: 99.7058% ( 2) 00:08:06.890 25.600 - 25.797: 99.7325% ( 1) 00:08:06.890 30.326 - 30.523: 99.7593% ( 1) 00:08:06.890 31.508 - 31.705: 99.7860% ( 1) 00:08:06.890 38.006 - 38.203: 99.8128% ( 1) 00:08:06.890 39.582 - 39.778: 99.8395% ( 1) 00:08:06.890 43.323 - 43.520: 99.8663% ( 1) 00:08:06.890 49.231 - 49.428: 99.8930% ( 1) 00:08:06.890 77.982 - 78.375: 99.9198% ( 1) 00:08:06.890 91.766 - 92.160: 99.9465% ( 1) 00:08:06.890 93.735 - 94.129: 99.9733% ( 1) 00:08:06.890 1739.225 - 1751.828: 100.0000% ( 1) 00:08:06.890 00:08:06.890 00:08:06.890 real 0m1.211s 00:08:06.890 user 0m1.069s 00:08:06.890 sys 0m0.089s 00:08:06.890 04:13:52 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:06.890 04:13:52 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:06.890 ************************************ 00:08:06.890 END TEST nvme_overhead 00:08:06.890 ************************************ 00:08:06.890 04:13:52 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:06.890 04:13:52 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:06.890 04:13:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.890 04:13:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.890 ************************************ 00:08:06.890 START TEST nvme_arbitration 00:08:06.890 ************************************ 00:08:06.890 04:13:52 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:10.180 Initializing NVMe Controllers 00:08:10.180 Attached to 0000:00:13.0 00:08:10.180 Attached to 0000:00:10.0 00:08:10.180 Attached to 0000:00:11.0 00:08:10.180 Attached to 0000:00:12.0 00:08:10.180 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:10.180 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:10.180 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:10.180 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:10.180 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:10.180 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:10.180 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:10.180 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:10.180 Initialization complete. Launching workers. 00:08:10.180 Starting thread on core 1 with urgent priority queue 00:08:10.180 Starting thread on core 2 with urgent priority queue 00:08:10.180 Starting thread on core 3 with urgent priority queue 00:08:10.180 Starting thread on core 0 with urgent priority queue 00:08:10.180 QEMU NVMe Ctrl (12343 ) core 0: 6741.33 IO/s 14.83 secs/100000 ios 00:08:10.180 QEMU NVMe Ctrl (12342 ) core 0: 6741.33 IO/s 14.83 secs/100000 ios 00:08:10.180 QEMU NVMe Ctrl (12340 ) core 1: 6570.67 IO/s 15.22 secs/100000 ios 00:08:10.180 QEMU NVMe Ctrl (12342 ) core 1: 6570.67 IO/s 15.22 secs/100000 ios 00:08:10.180 QEMU NVMe Ctrl (12341 ) core 2: 6485.33 IO/s 15.42 secs/100000 ios 00:08:10.180 QEMU NVMe Ctrl (12342 ) core 3: 6336.00 IO/s 15.78 secs/100000 ios 00:08:10.180 ======================================================== 00:08:10.180 00:08:10.180 00:08:10.180 real 0m3.218s 00:08:10.180 user 0m9.020s 00:08:10.180 sys 0m0.095s 00:08:10.180 04:13:55 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.180 ************************************ 00:08:10.180 END TEST nvme_arbitration 00:08:10.180 ************************************ 00:08:10.180 04:13:55 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 04:13:55 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:10.180 04:13:55 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:10.180 04:13:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.180 04:13:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.180 ************************************ 00:08:10.180 START TEST nvme_single_aen 00:08:10.180 ************************************ 00:08:10.180 04:13:55 
nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:10.181 Asynchronous Event Request test 00:08:10.181 Attached to 0000:00:13.0 00:08:10.181 Attached to 0000:00:10.0 00:08:10.181 Attached to 0000:00:11.0 00:08:10.181 Attached to 0000:00:12.0 00:08:10.181 Reset controller to setup AER completions for this process 00:08:10.181 Registering asynchronous event callbacks... 00:08:10.181 Getting orig temperature thresholds of all controllers 00:08:10.181 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.181 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.181 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.181 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.181 Setting all controllers temperature threshold low to trigger AER 00:08:10.181 Waiting for all controllers temperature threshold to be set lower 00:08:10.181 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.181 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:10.181 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.181 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:10.181 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.181 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:10.181 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.181 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:10.181 Waiting for all controllers to trigger AER and reset threshold 00:08:10.181 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.181 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.181 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.181 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.181 Cleaning up... 
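The single-AER pass above is produced entirely by the SPDK example binary named in the run_test line. As a point of reference only, a minimal sketch of invoking that same step by hand, assuming the repo layout and the four emulated controllers shown in this log, would be:

  # Sketch, not part of the captured run: manually re-running the single-AER step.
  # The path and flags are copied from the run_test line above; the comments are
  # inferences from the printed output, not documented option semantics.
  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvme/aer/aer -T -i 0   # -T drives the temperature-threshold AER path seen above;
                                     # -i 0 reuses shared-memory instance 0 like the harness does

As the output above shows, each controller's threshold is first read back as 343 Kelvin, lowered to force an Asynchronous Event Request, and then restored by aer_cb once the event for log page 2 arrives.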
00:08:10.498 ************************************ 00:08:10.498 END TEST nvme_single_aen 00:08:10.498 ************************************ 00:08:10.498 00:08:10.498 real 0m0.201s 00:08:10.498 user 0m0.068s 00:08:10.498 sys 0m0.088s 00:08:10.498 04:13:55 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.498 04:13:55 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:10.498 04:13:55 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:10.498 04:13:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.498 04:13:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.498 04:13:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.498 ************************************ 00:08:10.498 START TEST nvme_doorbell_aers 00:08:10.498 ************************************ 00:08:10.498 04:13:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:10.498 04:13:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:10.498 04:13:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:10.498 04:13:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:10.498 04:13:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:10.498 04:13:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:10.498 04:13:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:10.498 04:13:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:10.498 04:13:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:10.498 04:13:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:10.498 04:13:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:10.498 04:13:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:10.498 04:13:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:10.498 04:13:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:10.498 [2024-11-17 04:13:56.187884] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:20.496 Executing: test_write_invalid_db 00:08:20.496 Waiting for AER completion... 00:08:20.496 Failure: test_write_invalid_db 00:08:20.496 00:08:20.496 Executing: test_invalid_db_write_overflow_sq 00:08:20.496 Waiting for AER completion... 00:08:20.496 Failure: test_invalid_db_write_overflow_sq 00:08:20.496 00:08:20.496 Executing: test_invalid_db_write_overflow_cq 00:08:20.496 Waiting for AER completion... 
00:08:20.496 Failure: test_invalid_db_write_overflow_cq 00:08:20.496 00:08:20.496 04:14:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:20.496 04:14:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:20.754 [2024-11-17 04:14:06.228537] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:30.737 Executing: test_write_invalid_db 00:08:30.737 Waiting for AER completion... 00:08:30.737 Failure: test_write_invalid_db 00:08:30.737 00:08:30.737 Executing: test_invalid_db_write_overflow_sq 00:08:30.737 Waiting for AER completion... 00:08:30.737 Failure: test_invalid_db_write_overflow_sq 00:08:30.737 00:08:30.737 Executing: test_invalid_db_write_overflow_cq 00:08:30.737 Waiting for AER completion... 00:08:30.737 Failure: test_invalid_db_write_overflow_cq 00:08:30.737 00:08:30.737 04:14:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:30.737 04:14:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:30.737 [2024-11-17 04:14:16.265798] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:40.700 Executing: test_write_invalid_db 00:08:40.700 Waiting for AER completion... 00:08:40.700 Failure: test_write_invalid_db 00:08:40.700 00:08:40.700 Executing: test_invalid_db_write_overflow_sq 00:08:40.700 Waiting for AER completion... 00:08:40.700 Failure: test_invalid_db_write_overflow_sq 00:08:40.700 00:08:40.700 Executing: test_invalid_db_write_overflow_cq 00:08:40.700 Waiting for AER completion... 00:08:40.700 Failure: test_invalid_db_write_overflow_cq 00:08:40.700 00:08:40.700 04:14:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:40.700 04:14:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:40.700 [2024-11-17 04:14:26.294004] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 Executing: test_write_invalid_db 00:08:50.662 Waiting for AER completion... 00:08:50.662 Failure: test_write_invalid_db 00:08:50.662 00:08:50.662 Executing: test_invalid_db_write_overflow_sq 00:08:50.662 Waiting for AER completion... 00:08:50.662 Failure: test_invalid_db_write_overflow_sq 00:08:50.662 00:08:50.662 Executing: test_invalid_db_write_overflow_cq 00:08:50.662 Waiting for AER completion... 
00:08:50.662 Failure: test_invalid_db_write_overflow_cq 00:08:50.662 00:08:50.662 00:08:50.662 real 0m40.179s 00:08:50.662 user 0m34.436s 00:08:50.662 sys 0m5.391s 00:08:50.662 04:14:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.662 04:14:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:50.662 ************************************ 00:08:50.662 END TEST nvme_doorbell_aers 00:08:50.662 ************************************ 00:08:50.662 04:14:36 nvme -- nvme/nvme.sh@97 -- # uname 00:08:50.662 04:14:36 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:50.662 04:14:36 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:50.662 04:14:36 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:50.662 04:14:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.662 04:14:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.662 ************************************ 00:08:50.662 START TEST nvme_multi_aen 00:08:50.662 ************************************ 00:08:50.662 04:14:36 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:50.662 [2024-11-17 04:14:36.348089] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.348146] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.348158] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.349353] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.349381] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.349390] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.350327] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.350349] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.350356] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.351296] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.351317] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 00:08:50.662 [2024-11-17 04:14:36.351324] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74997) is not found. Dropping the request. 
00:08:50.662 Child process pid: 75523 00:08:50.920 [Child] Asynchronous Event Request test 00:08:50.920 [Child] Attached to 0000:00:13.0 00:08:50.920 [Child] Attached to 0000:00:10.0 00:08:50.920 [Child] Attached to 0000:00:11.0 00:08:50.920 [Child] Attached to 0000:00:12.0 00:08:50.920 [Child] Registering asynchronous event callbacks... 00:08:50.920 [Child] Getting orig temperature thresholds of all controllers 00:08:50.920 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.920 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.920 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.920 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.920 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:50.920 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.920 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.920 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.920 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.920 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.920 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.920 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.920 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.920 [Child] Cleaning up... 00:08:50.920 Asynchronous Event Request test 00:08:50.920 Attached to 0000:00:13.0 00:08:50.920 Attached to 0000:00:10.0 00:08:50.920 Attached to 0000:00:11.0 00:08:50.920 Attached to 0000:00:12.0 00:08:50.920 Reset controller to setup AER completions for this process 00:08:50.920 Registering asynchronous event callbacks... 
00:08:50.920 Getting orig temperature thresholds of all controllers 00:08:50.920 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.920 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.920 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.920 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.920 Setting all controllers temperature threshold low to trigger AER 00:08:50.920 Waiting for all controllers temperature threshold to be set lower 00:08:50.920 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.920 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:50.920 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.920 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:50.920 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.920 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:50.920 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.920 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:50.920 Waiting for all controllers to trigger AER and reset threshold 00:08:50.920 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.920 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.920 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.920 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.920 Cleaning up... 00:08:50.920 00:08:50.920 real 0m0.390s 00:08:50.921 user 0m0.120s 00:08:50.921 sys 0m0.170s 00:08:50.921 04:14:36 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.921 04:14:36 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:50.921 ************************************ 00:08:50.921 END TEST nvme_multi_aen 00:08:50.921 ************************************ 00:08:50.921 04:14:36 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:50.921 04:14:36 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:50.921 04:14:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.921 04:14:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.921 ************************************ 00:08:50.921 START TEST nvme_startup 00:08:50.921 ************************************ 00:08:50.921 04:14:36 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:51.178 Initializing NVMe Controllers 00:08:51.178 Attached to 0000:00:13.0 00:08:51.178 Attached to 0000:00:10.0 00:08:51.178 Attached to 0000:00:11.0 00:08:51.178 Attached to 0000:00:12.0 00:08:51.178 Initialization complete. 00:08:51.178 Time used:131858.547 (us). 
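For scale, the startup pass above attaches all four controllers in about 131.9 ms (the reported 131858.547 us). A minimal sketch of reproducing this step by hand, and of listing the target controllers the way the harness's get_nvme_bdfs helper does earlier in this log, assuming the same repo layout:

  # Sketch, not part of the captured run; the -t value is assumed to be a
  # microsecond budget for startup, as the reported "Time used" suggests.
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000

  # Enumerate the controllers exactly as the get_nvme_bdfs helper does
  # (gen_nvme.sh emits a JSON config; jq pulls out each PCIe address):
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
  # expected here: 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0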
00:08:51.178 00:08:51.178 real 0m0.186s 00:08:51.178 user 0m0.062s 00:08:51.178 sys 0m0.080s 00:08:51.178 04:14:36 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.178 04:14:36 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:51.178 ************************************ 00:08:51.178 END TEST nvme_startup 00:08:51.178 ************************************ 00:08:51.178 04:14:36 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:51.178 04:14:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.178 04:14:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.178 04:14:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.178 ************************************ 00:08:51.178 START TEST nvme_multi_secondary 00:08:51.178 ************************************ 00:08:51.178 04:14:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:51.178 04:14:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75568 00:08:51.178 04:14:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75569 00:08:51.179 04:14:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:51.179 04:14:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:51.179 04:14:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:54.455 Initializing NVMe Controllers 00:08:54.455 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.455 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.455 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.455 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.455 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:54.455 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:54.455 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:54.455 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:54.455 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:54.455 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:54.455 Initialization complete. Launching workers. 
00:08:54.455 ======================================================== 00:08:54.455 Latency(us) 00:08:54.455 Device Information : IOPS MiB/s Average min max 00:08:54.455 PCIE (0000:00:13.0) NSID 1 from core 1: 8151.88 31.84 1962.33 711.67 5397.73 00:08:54.455 PCIE (0000:00:10.0) NSID 1 from core 1: 8151.88 31.84 1961.56 693.12 5293.26 00:08:54.455 PCIE (0000:00:11.0) NSID 1 from core 1: 8151.88 31.84 1962.51 707.40 5244.89 00:08:54.455 PCIE (0000:00:12.0) NSID 1 from core 1: 8151.88 31.84 1962.51 704.95 5417.40 00:08:54.455 PCIE (0000:00:12.0) NSID 2 from core 1: 8151.88 31.84 1962.53 709.96 5678.97 00:08:54.455 PCIE (0000:00:12.0) NSID 3 from core 1: 8151.88 31.84 1962.59 712.23 5458.66 00:08:54.455 ======================================================== 00:08:54.455 Total : 48911.27 191.06 1962.34 693.12 5678.97 00:08:54.455 00:08:54.455 Initializing NVMe Controllers 00:08:54.455 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.455 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.455 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.455 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.455 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:54.455 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:54.455 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:54.455 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:54.455 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:54.455 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:54.455 Initialization complete. Launching workers. 00:08:54.455 ======================================================== 00:08:54.455 Latency(us) 00:08:54.455 Device Information : IOPS MiB/s Average min max 00:08:54.455 PCIE (0000:00:13.0) NSID 1 from core 2: 3358.98 13.12 4762.95 1354.82 16318.93 00:08:54.455 PCIE (0000:00:10.0) NSID 1 from core 2: 3358.98 13.12 4762.32 1275.33 12491.23 00:08:54.455 PCIE (0000:00:11.0) NSID 1 from core 2: 3358.98 13.12 4763.15 1318.66 12104.11 00:08:54.455 PCIE (0000:00:12.0) NSID 1 from core 2: 3358.98 13.12 4763.10 1214.62 12416.88 00:08:54.455 PCIE (0000:00:12.0) NSID 2 from core 2: 3358.98 13.12 4763.11 1145.32 12322.56 00:08:54.455 PCIE (0000:00:12.0) NSID 3 from core 2: 3358.98 13.12 4763.07 1030.25 12566.91 00:08:54.455 ======================================================== 00:08:54.455 Total : 20153.88 78.73 4762.95 1030.25 16318.93 00:08:54.455 00:08:54.455 04:14:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75568 00:08:56.983 Initializing NVMe Controllers 00:08:56.983 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:56.983 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:56.983 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:56.983 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:56.983 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:56.983 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:56.983 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:56.983 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:56.983 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:56.983 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:56.983 Initialization complete. Launching workers. 
00:08:56.983 ======================================================== 00:08:56.983 Latency(us) 00:08:56.983 Device Information : IOPS MiB/s Average min max 00:08:56.983 PCIE (0000:00:13.0) NSID 1 from core 0: 11228.55 43.86 1424.56 670.35 6428.02 00:08:56.983 PCIE (0000:00:10.0) NSID 1 from core 0: 11228.55 43.86 1423.72 652.23 6415.69 00:08:56.983 PCIE (0000:00:11.0) NSID 1 from core 0: 11228.55 43.86 1424.51 671.94 5610.22 00:08:56.983 PCIE (0000:00:12.0) NSID 1 from core 0: 11228.55 43.86 1424.47 618.50 5866.41 00:08:56.983 PCIE (0000:00:12.0) NSID 2 from core 0: 11228.55 43.86 1424.45 497.03 5863.78 00:08:56.983 PCIE (0000:00:12.0) NSID 3 from core 0: 11228.55 43.86 1424.41 382.85 5977.83 00:08:56.983 ======================================================== 00:08:56.983 Total : 67371.32 263.17 1424.35 382.85 6428.02 00:08:56.983 00:08:56.983 04:14:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75569 00:08:56.983 04:14:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75644 00:08:56.983 04:14:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:56.983 04:14:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75645 00:08:56.983 04:14:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:56.983 04:14:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:00.261 Initializing NVMe Controllers 00:09:00.261 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:00.261 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.261 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.261 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.261 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:00.261 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:00.261 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:00.261 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:00.261 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:00.261 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:00.261 Initialization complete. Launching workers. 
00:09:00.261 ======================================================== 00:09:00.261 Latency(us) 00:09:00.261 Device Information : IOPS MiB/s Average min max 00:09:00.261 PCIE (0000:00:13.0) NSID 1 from core 0: 8367.07 32.68 1911.85 688.64 6973.19 00:09:00.261 PCIE (0000:00:10.0) NSID 1 from core 0: 8367.07 32.68 1911.00 669.08 7093.49 00:09:00.261 PCIE (0000:00:11.0) NSID 1 from core 0: 8367.07 32.68 1911.94 689.61 7119.89 00:09:00.261 PCIE (0000:00:12.0) NSID 1 from core 0: 8367.07 32.68 1911.93 689.32 7120.02 00:09:00.261 PCIE (0000:00:12.0) NSID 2 from core 0: 8367.07 32.68 1911.99 691.09 6770.50 00:09:00.261 PCIE (0000:00:12.0) NSID 3 from core 0: 8367.07 32.68 1911.89 683.78 6788.90 00:09:00.261 ======================================================== 00:09:00.261 Total : 50202.43 196.10 1911.77 669.08 7120.02 00:09:00.261 00:09:00.261 Initializing NVMe Controllers 00:09:00.261 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:00.261 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.261 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.261 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.261 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:00.261 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:00.261 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:00.261 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:00.261 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:00.261 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:00.261 Initialization complete. Launching workers. 00:09:00.261 ======================================================== 00:09:00.261 Latency(us) 00:09:00.261 Device Information : IOPS MiB/s Average min max 00:09:00.261 PCIE (0000:00:13.0) NSID 1 from core 1: 8292.31 32.39 1929.06 710.80 5942.72 00:09:00.261 PCIE (0000:00:10.0) NSID 1 from core 1: 8292.31 32.39 1928.18 700.51 5703.55 00:09:00.261 PCIE (0000:00:11.0) NSID 1 from core 1: 8292.31 32.39 1929.02 716.02 5301.29 00:09:00.261 PCIE (0000:00:12.0) NSID 1 from core 1: 8292.31 32.39 1928.94 712.82 5709.06 00:09:00.261 PCIE (0000:00:12.0) NSID 2 from core 1: 8292.31 32.39 1928.88 698.80 5959.90 00:09:00.261 PCIE (0000:00:12.0) NSID 3 from core 1: 8292.31 32.39 1928.81 575.37 5721.51 00:09:00.261 ======================================================== 00:09:00.261 Total : 49753.88 194.35 1928.82 575.37 5959.90 00:09:00.261 00:09:02.160 Initializing NVMe Controllers 00:09:02.160 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:02.160 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:02.160 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:02.160 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:02.160 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:02.160 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:02.160 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:02.160 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:02.160 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:02.160 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:02.160 Initialization complete. Launching workers. 
00:09:02.160 ======================================================== 00:09:02.160 Latency(us) 00:09:02.160 Device Information : IOPS MiB/s Average min max 00:09:02.160 PCIE (0000:00:13.0) NSID 1 from core 2: 4828.76 18.86 3312.95 726.79 12434.23 00:09:02.160 PCIE (0000:00:10.0) NSID 1 from core 2: 4828.76 18.86 3311.51 725.77 12410.22 00:09:02.160 PCIE (0000:00:11.0) NSID 1 from core 2: 4828.76 18.86 3313.04 725.01 12163.89 00:09:02.160 PCIE (0000:00:12.0) NSID 1 from core 2: 4828.76 18.86 3312.64 693.55 12105.03 00:09:02.160 PCIE (0000:00:12.0) NSID 2 from core 2: 4828.76 18.86 3312.74 573.86 11910.84 00:09:02.160 PCIE (0000:00:12.0) NSID 3 from core 2: 4828.76 18.86 3312.68 457.62 12071.45 00:09:02.160 ======================================================== 00:09:02.160 Total : 28972.55 113.17 3312.59 457.62 12434.23 00:09:02.160 00:09:02.160 ************************************ 00:09:02.160 END TEST nvme_multi_secondary 00:09:02.160 ************************************ 00:09:02.160 04:14:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75644 00:09:02.160 04:14:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75645 00:09:02.160 00:09:02.160 real 0m10.684s 00:09:02.160 user 0m18.330s 00:09:02.160 sys 0m0.551s 00:09:02.160 04:14:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.160 04:14:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:02.160 04:14:47 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:02.160 04:14:47 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:02.160 04:14:47 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74601 ]] 00:09:02.160 04:14:47 nvme -- common/autotest_common.sh@1094 -- # kill 74601 00:09:02.160 04:14:47 nvme -- common/autotest_common.sh@1095 -- # wait 74601 00:09:02.160 [2024-11-17 04:14:47.555998] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.556078] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.556100] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.556121] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.556822] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.556876] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.556893] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.556911] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.557521] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 
00:09:02.160 [2024-11-17 04:14:47.557573] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.557593] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.557613] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.558192] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.558422] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.558445] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 [2024-11-17 04:14:47.558463] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75522) is not found. Dropping the request. 00:09:02.160 04:14:47 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:02.160 04:14:47 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:02.160 04:14:47 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:02.160 04:14:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.160 04:14:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.160 04:14:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.160 ************************************ 00:09:02.160 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:02.160 ************************************ 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:02.160 * Looking for test storage... 
00:09:02.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.160 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.161 --rc genhtml_branch_coverage=1 00:09:02.161 --rc genhtml_function_coverage=1 00:09:02.161 --rc genhtml_legend=1 00:09:02.161 --rc geninfo_all_blocks=1 00:09:02.161 --rc geninfo_unexecuted_blocks=1 00:09:02.161 00:09:02.161 ' 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.161 --rc genhtml_branch_coverage=1 00:09:02.161 --rc genhtml_function_coverage=1 00:09:02.161 --rc genhtml_legend=1 00:09:02.161 --rc geninfo_all_blocks=1 00:09:02.161 --rc geninfo_unexecuted_blocks=1 00:09:02.161 00:09:02.161 ' 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:02.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.161 --rc genhtml_branch_coverage=1 00:09:02.161 --rc genhtml_function_coverage=1 00:09:02.161 --rc genhtml_legend=1 00:09:02.161 --rc geninfo_all_blocks=1 00:09:02.161 --rc geninfo_unexecuted_blocks=1 00:09:02.161 00:09:02.161 ' 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.161 --rc genhtml_branch_coverage=1 00:09:02.161 --rc genhtml_function_coverage=1 00:09:02.161 --rc genhtml_legend=1 00:09:02.161 --rc geninfo_all_blocks=1 00:09:02.161 --rc geninfo_unexecuted_blocks=1 00:09:02.161 00:09:02.161 ' 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:02.161 
04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75805 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75805 00:09:02.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75805 ']' 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.161 04:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:02.419 [2024-11-17 04:14:47.885469] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:02.419 [2024-11-17 04:14:47.885577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75805 ] 00:09:02.419 [2024-11-17 04:14:48.054416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.419 [2024-11-17 04:14:48.075409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.419 [2024-11-17 04:14:48.075652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.419 [2024-11-17 04:14:48.075750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.419 [2024-11-17 04:14:48.075778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 nvme0n1 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_QRURd.txt 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 true 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731816888 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75828 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:03.354 04:14:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:05.253 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:05.253 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.253 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.254 [2024-11-17 04:14:50.811263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:05.254 [2024-11-17 04:14:50.813709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:05.254 [2024-11-17 04:14:50.813761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:05.254 [2024-11-17 04:14:50.813777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.254 [2024-11-17 04:14:50.815589] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:05.254 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75828 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75828 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75828 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_QRURd.txt 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_QRURd.txt 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75805 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75805 ']' 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75805 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75805 00:09:05.254 killing process with pid 75805 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75805' 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75805 00:09:05.254 04:14:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75805 00:09:05.512 04:14:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:05.512 04:14:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:05.512 00:09:05.512 real 0m3.526s 00:09:05.512 user 0m12.656s 00:09:05.512 sys 0m0.459s 00:09:05.513 04:14:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:05.513 04:14:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.513 ************************************ 00:09:05.513 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:05.513 ************************************ 00:09:05.513 04:14:51 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:05.513 04:14:51 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:05.513 04:14:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.513 04:14:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.513 04:14:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.513 ************************************ 00:09:05.513 START TEST nvme_fio 00:09:05.513 ************************************ 00:09:05.513 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:05.513 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:05.513 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:05.513 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:05.513 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:05.513 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:05.513 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:05.513 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:05.513 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:05.771 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:05.771 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:05.771 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:05.771 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:05.771 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:05.771 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:05.771 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:05.771 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:05.771 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:06.030 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:06.030 04:14:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:06.030 04:14:51 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:06.030 04:14:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:06.287 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:06.287 fio-3.35 00:09:06.287 Starting 1 thread 00:09:12.869 00:09:12.869 test: (groupid=0, jobs=1): err= 0: pid=75951: Sun Nov 17 04:14:58 2024 00:09:12.869 read: IOPS=24.6k, BW=96.0MiB/s (101MB/s)(192MiB/2001msec) 00:09:12.869 slat (nsec): min=4227, max=84274, avg=4917.00, stdev=2126.54 00:09:12.869 clat (usec): min=238, max=11614, avg=2603.43, stdev=786.77 00:09:12.869 lat (usec): min=242, max=11633, avg=2608.34, stdev=788.13 00:09:12.869 clat percentiles (usec): 00:09:12.869 | 1.00th=[ 1745], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:12.869 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:12.869 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2868], 95.00th=[ 4359], 00:09:12.869 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 8455], 99.95th=[ 8848], 00:09:12.869 | 99.99th=[11207] 00:09:12.869 bw ( KiB/s): min=96240, max=100936, per=100.00%, avg=99309.33, stdev=2659.71, samples=3 00:09:12.869 iops : min=24060, max=25234, avg=24827.33, stdev=664.93, samples=3 00:09:12.869 write: IOPS=24.4k, BW=95.4MiB/s (100.0MB/s)(191MiB/2001msec); 0 zone resets 00:09:12.869 slat (nsec): min=4301, max=79969, avg=5188.88, stdev=2178.55 00:09:12.869 clat (usec): min=210, max=11373, avg=2606.75, stdev=792.15 00:09:12.869 lat (usec): min=215, max=11382, avg=2611.94, stdev=793.50 00:09:12.869 clat percentiles (usec): 00:09:12.869 | 1.00th=[ 1713], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:12.869 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:12.869 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2868], 95.00th=[ 4424], 00:09:12.869 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 8455], 99.95th=[ 8979], 00:09:12.869 | 99.99th=[11076] 00:09:12.869 bw ( KiB/s): min=95904, max=101784, per=100.00%, avg=99397.33, stdev=3092.27, samples=3 00:09:12.869 iops : min=23976, max=25446, avg=24849.33, stdev=773.07, samples=3 00:09:12.869 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.05% 00:09:12.869 lat (msec) : 2=2.39%, 4=91.83%, 10=5.67%, 20=0.02% 00:09:12.869 cpu : usr=99.25%, sys=0.10%, 
ctx=4, majf=0, minf=627 00:09:12.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:12.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.869 issued rwts: total=49172,48850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.869 00:09:12.869 Run status group 0 (all jobs): 00:09:12.869 READ: bw=96.0MiB/s (101MB/s), 96.0MiB/s-96.0MiB/s (101MB/s-101MB/s), io=192MiB (201MB), run=2001-2001msec 00:09:12.869 WRITE: bw=95.4MiB/s (100.0MB/s), 95.4MiB/s-95.4MiB/s (100.0MB/s-100.0MB/s), io=191MiB (200MB), run=2001-2001msec 00:09:13.126 ----------------------------------------------------- 00:09:13.126 Suppressions used: 00:09:13.126 count bytes template 00:09:13.126 1 32 /usr/src/fio/parse.c 00:09:13.126 1 8 libtcmalloc_minimal.so 00:09:13.126 ----------------------------------------------------- 00:09:13.126 00:09:13.126 04:14:58 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:13.126 04:14:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:13.126 04:14:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.126 04:14:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:13.385 04:14:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.385 04:14:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:13.385 04:14:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:13.385 04:14:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:13.385 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:13.647 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:13.647 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:13.647 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:13.647 04:14:59 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:13.647 04:14:59 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.647 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:13.647 fio-3.35 00:09:13.647 Starting 1 thread 00:09:20.208 00:09:20.208 test: (groupid=0, jobs=1): err= 0: pid=76007: Sun Nov 17 04:15:05 2024 00:09:20.208 read: IOPS=23.9k, BW=93.5MiB/s (98.1MB/s)(187MiB/2001msec) 00:09:20.208 slat (usec): min=4, max=244, avg= 5.11, stdev= 2.62 00:09:20.208 clat (usec): min=570, max=9471, avg=2670.69, stdev=849.65 00:09:20.208 lat (usec): min=582, max=9510, avg=2675.80, stdev=851.26 00:09:20.208 clat percentiles (usec): 00:09:20.208 | 1.00th=[ 1762], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:20.208 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:20.208 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 3458], 95.00th=[ 5080], 00:09:20.208 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[ 7046], 99.95th=[ 8291], 00:09:20.208 | 99.99th=[ 9372] 00:09:20.208 bw ( KiB/s): min=93160, max=101000, per=100.00%, avg=96154.67, stdev=4234.99, samples=3 00:09:20.208 iops : min=23290, max=25250, avg=24038.67, stdev=1058.75, samples=3 00:09:20.208 write: IOPS=23.8k, BW=92.9MiB/s (97.5MB/s)(186MiB/2001msec); 0 zone resets 00:09:20.208 slat (nsec): min=4335, max=57943, avg=5389.92, stdev=2454.43 00:09:20.208 clat (usec): min=729, max=9399, avg=2674.75, stdev=854.49 00:09:20.208 lat (usec): min=741, max=9416, avg=2680.14, stdev=856.13 00:09:20.209 clat percentiles (usec): 00:09:20.209 | 1.00th=[ 1745], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:20.209 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:20.209 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 3458], 95.00th=[ 5080], 00:09:20.209 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[ 7046], 99.95th=[ 8455], 00:09:20.209 | 99.99th=[ 9241] 00:09:20.209 bw ( KiB/s): min=92880, max=100536, per=100.00%, avg=96280.00, stdev=3899.12, samples=3 00:09:20.209 iops : min=23220, max=25134, avg=24070.00, stdev=974.78, samples=3 00:09:20.209 lat (usec) : 750=0.01%, 1000=0.04% 00:09:20.209 lat (msec) : 2=2.15%, 4=89.66%, 10=8.15% 00:09:20.209 cpu : usr=98.90%, sys=0.25%, ctx=4, majf=0, minf=626 00:09:20.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:20.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.209 issued rwts: total=47908,47608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.209 00:09:20.209 Run status group 0 (all jobs): 00:09:20.209 READ: bw=93.5MiB/s (98.1MB/s), 93.5MiB/s-93.5MiB/s (98.1MB/s-98.1MB/s), io=187MiB (196MB), run=2001-2001msec 00:09:20.209 WRITE: bw=92.9MiB/s (97.5MB/s), 92.9MiB/s-92.9MiB/s (97.5MB/s-97.5MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:20.467 ----------------------------------------------------- 00:09:20.467 Suppressions used: 00:09:20.467 count bytes template 00:09:20.467 1 32 /usr/src/fio/parse.c 00:09:20.467 1 8 libtcmalloc_minimal.so 00:09:20.467 ----------------------------------------------------- 00:09:20.467 00:09:20.467 04:15:06 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:09:20.467 04:15:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:20.467 04:15:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.467 04:15:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:20.725 04:15:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.725 04:15:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:20.983 04:15:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:20.983 04:15:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:20.983 04:15:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:21.241 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:21.241 fio-3.35 00:09:21.241 Starting 1 thread 00:09:29.385 00:09:29.385 test: (groupid=0, jobs=1): err= 0: pid=76068: Sun Nov 17 04:15:14 2024 00:09:29.385 read: IOPS=24.5k, BW=95.9MiB/s (101MB/s)(192MiB/2001msec) 00:09:29.385 slat (nsec): min=4231, max=69240, avg=5004.83, stdev=2291.76 00:09:29.385 clat (usec): min=659, max=11093, avg=2607.67, stdev=831.67 00:09:29.385 lat (usec): min=672, max=11148, avg=2612.67, stdev=833.28 00:09:29.385 clat percentiles (usec): 00:09:29.385 | 1.00th=[ 1827], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2311], 00:09:29.385 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 
2376], 60.00th=[ 2409], 00:09:29.385 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2868], 95.00th=[ 4817], 00:09:29.385 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 7504], 99.95th=[ 8160], 00:09:29.385 | 99.99th=[10945] 00:09:29.385 bw ( KiB/s): min=90240, max=102280, per=98.43%, avg=96639.00, stdev=6055.69, samples=3 00:09:29.385 iops : min=22560, max=25570, avg=24159.67, stdev=1513.91, samples=3 00:09:29.385 write: IOPS=24.4k, BW=95.2MiB/s (99.9MB/s)(191MiB/2001msec); 0 zone resets 00:09:29.385 slat (nsec): min=4325, max=80565, avg=5259.98, stdev=2294.53 00:09:29.385 clat (usec): min=704, max=10975, avg=2606.46, stdev=826.38 00:09:29.385 lat (usec): min=717, max=10994, avg=2611.72, stdev=827.93 00:09:29.385 clat percentiles (usec): 00:09:29.385 | 1.00th=[ 1827], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2311], 00:09:29.385 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409], 00:09:29.385 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2900], 95.00th=[ 4752], 00:09:29.385 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 7504], 99.95th=[ 8717], 00:09:29.385 | 99.99th=[10814] 00:09:29.385 bw ( KiB/s): min=89272, max=103968, per=99.22%, avg=96772.33, stdev=7352.74, samples=3 00:09:29.385 iops : min=22318, max=25992, avg=24193.00, stdev=1838.18, samples=3 00:09:29.385 lat (usec) : 750=0.01%, 1000=0.02% 00:09:29.385 lat (msec) : 2=1.94%, 4=91.77%, 10=6.24%, 20=0.03% 00:09:29.385 cpu : usr=99.40%, sys=0.00%, ctx=3, majf=0, minf=628 00:09:29.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:29.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.385 issued rwts: total=49117,48791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.385 00:09:29.385 Run status group 0 (all jobs): 00:09:29.385 READ: bw=95.9MiB/s (101MB/s), 95.9MiB/s-95.9MiB/s (101MB/s-101MB/s), io=192MiB (201MB), run=2001-2001msec 00:09:29.385 WRITE: bw=95.2MiB/s (99.9MB/s), 95.2MiB/s-95.2MiB/s (99.9MB/s-99.9MB/s), io=191MiB (200MB), run=2001-2001msec 00:09:29.385 ----------------------------------------------------- 00:09:29.385 Suppressions used: 00:09:29.385 count bytes template 00:09:29.385 1 32 /usr/src/fio/parse.c 00:09:29.385 1 8 libtcmalloc_minimal.so 00:09:29.385 ----------------------------------------------------- 00:09:29.385 00:09:29.385 04:15:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:29.385 04:15:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:29.385 04:15:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:29.385 04:15:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:29.385 04:15:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:29.385 04:15:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:29.385 04:15:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:29.385 04:15:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:29.385 04:15:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:29.385 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:29.385 fio-3.35 00:09:29.385 Starting 1 thread 00:09:34.667 00:09:34.668 test: (groupid=0, jobs=1): err= 0: pid=76129: Sun Nov 17 04:15:20 2024 00:09:34.668 read: IOPS=21.0k, BW=81.8MiB/s (85.8MB/s)(164MiB/2001msec) 00:09:34.668 slat (nsec): min=3410, max=94256, avg=5485.51, stdev=2544.55 00:09:34.668 clat (usec): min=217, max=12757, avg=3045.48, stdev=1050.67 00:09:34.668 lat (usec): min=221, max=12826, avg=3050.96, stdev=1052.01 00:09:34.668 clat percentiles (usec): 00:09:34.668 | 1.00th=[ 1893], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:34.668 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2737], 60.00th=[ 2868], 00:09:34.668 | 70.00th=[ 3064], 80.00th=[ 3326], 90.00th=[ 4293], 95.00th=[ 5473], 00:09:34.668 | 99.00th=[ 7111], 99.50th=[ 7701], 99.90th=[ 9241], 99.95th=[10290], 00:09:34.668 | 99.99th=[12387] 00:09:34.668 bw ( KiB/s): min=74616, max=86424, per=94.25%, avg=78984.00, stdev=6475.73, samples=3 00:09:34.668 iops : min=18654, max=21606, avg=19746.00, stdev=1618.93, samples=3 00:09:34.668 write: IOPS=20.8k, BW=81.4MiB/s (85.4MB/s)(163MiB/2001msec); 0 zone resets 00:09:34.668 slat (usec): min=3, max=123, avg= 5.65, stdev= 2.46 00:09:34.668 clat (usec): min=198, max=12425, avg=3057.52, stdev=1045.72 00:09:34.668 lat (usec): min=202, max=12444, avg=3063.17, stdev=1047.03 00:09:34.668 clat percentiles (usec): 00:09:34.668 | 1.00th=[ 1926], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:34.668 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2737], 60.00th=[ 2868], 00:09:34.668 | 70.00th=[ 3064], 80.00th=[ 3326], 90.00th=[ 4293], 95.00th=[ 5473], 00:09:34.668 
| 99.00th=[ 7046], 99.50th=[ 7635], 99.90th=[ 9372], 99.95th=[10421], 00:09:34.668 | 99.99th=[12256] 00:09:34.668 bw ( KiB/s): min=74592, max=86624, per=94.85%, avg=79069.33, stdev=6579.87, samples=3 00:09:34.668 iops : min=18648, max=21656, avg=19767.33, stdev=1644.97, samples=3 00:09:34.668 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.04% 00:09:34.668 lat (msec) : 2=1.30%, 4=87.00%, 10=11.56%, 20=0.07% 00:09:34.668 cpu : usr=98.95%, sys=0.20%, ctx=2, majf=0, minf=626 00:09:34.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:34.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.668 issued rwts: total=41923,41702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.668 00:09:34.668 Run status group 0 (all jobs): 00:09:34.668 READ: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=164MiB (172MB), run=2001-2001msec 00:09:34.668 WRITE: bw=81.4MiB/s (85.4MB/s), 81.4MiB/s-81.4MiB/s (85.4MB/s-85.4MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:34.668 ----------------------------------------------------- 00:09:34.668 Suppressions used: 00:09:34.668 count bytes template 00:09:34.668 1 32 /usr/src/fio/parse.c 00:09:34.668 1 8 libtcmalloc_minimal.so 00:09:34.668 ----------------------------------------------------- 00:09:34.668 00:09:34.668 04:15:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:34.668 04:15:20 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:34.668 00:09:34.668 real 0m29.065s 00:09:34.668 user 0m18.862s 00:09:34.668 sys 0m17.916s 00:09:34.668 04:15:20 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.668 ************************************ 00:09:34.668 END TEST nvme_fio 00:09:34.668 ************************************ 00:09:34.668 04:15:20 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:34.668 00:09:34.668 real 1m37.055s 00:09:34.668 user 3m35.114s 00:09:34.668 sys 0m27.998s 00:09:34.668 04:15:20 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.668 ************************************ 00:09:34.668 END TEST nvme 00:09:34.668 ************************************ 00:09:34.668 04:15:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:34.668 04:15:20 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:34.668 04:15:20 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:34.668 04:15:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.668 04:15:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.668 04:15:20 -- common/autotest_common.sh@10 -- # set +x 00:09:34.668 ************************************ 00:09:34.668 START TEST nvme_scc 00:09:34.668 ************************************ 00:09:34.668 04:15:20 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:34.928 * Looking for test storage... 
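The four fio runs above (one per PCIe controller: 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0) all follow the same invocation pattern from the fio_nvme helper traced in the xtrace: the ASAN runtime and the SPDK fio plugin are preloaded, and each controller is addressed through the plugin's filename syntax. A minimal sketch of that pattern, with the paths, job file, and filename syntax taken from the xtrace above; the loop and variable names are illustrative, not a verbatim copy of nvme.sh:

  # Illustrative sketch based on the xtrace above; not a verbatim copy of nvme.sh.
  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  JOB=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # fio reserves ':' in --filename values, so the BDF is written with dots,
    # exactly as the log shows (trtype=PCIe traddr=0000.00.10.0 and so on).
    filename="trtype=PCIe traddr=${bdf//:/.}"
    # The plugin is ASAN-instrumented, so libasan is preloaded alongside it.
    LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" \
      /usr/src/fio/fio "$JOB" "--filename=$filename" --bs=4096
  done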
00:09:34.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.928 04:15:20 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.928 04:15:20 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.928 04:15:20 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.928 04:15:20 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.928 04:15:20 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.929 04:15:20 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.929 04:15:20 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:34.929 04:15:20 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.929 04:15:20 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.929 --rc genhtml_branch_coverage=1 00:09:34.929 --rc genhtml_function_coverage=1 00:09:34.929 --rc genhtml_legend=1 00:09:34.929 --rc geninfo_all_blocks=1 00:09:34.929 --rc geninfo_unexecuted_blocks=1 00:09:34.929 00:09:34.929 ' 00:09:34.929 04:15:20 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.929 --rc genhtml_branch_coverage=1 00:09:34.929 --rc genhtml_function_coverage=1 00:09:34.929 --rc genhtml_legend=1 00:09:34.929 --rc geninfo_all_blocks=1 00:09:34.929 --rc geninfo_unexecuted_blocks=1 00:09:34.929 00:09:34.929 ' 00:09:34.929 04:15:20 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:34.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.929 --rc genhtml_branch_coverage=1 00:09:34.929 --rc genhtml_function_coverage=1 00:09:34.929 --rc genhtml_legend=1 00:09:34.929 --rc geninfo_all_blocks=1 00:09:34.929 --rc geninfo_unexecuted_blocks=1 00:09:34.929 00:09:34.929 ' 00:09:34.929 04:15:20 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.929 --rc genhtml_branch_coverage=1 00:09:34.929 --rc genhtml_function_coverage=1 00:09:34.929 --rc genhtml_legend=1 00:09:34.929 --rc geninfo_all_blocks=1 00:09:34.929 --rc geninfo_unexecuted_blocks=1 00:09:34.929 00:09:34.929 ' 00:09:34.929 04:15:20 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.929 04:15:20 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.929 04:15:20 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.929 04:15:20 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.929 04:15:20 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.929 04:15:20 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.929 04:15:20 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.929 04:15:20 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.929 04:15:20 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:34.929 04:15:20 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
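A few lines up, the nvme_scc coverage preamble runs the lt/cmp_versions helpers from scripts/common.sh to decide whether the installed lcov (1.15) predates version 2: each version string is split on '.', '-' and ':' and compared component by component. A simplified, illustrative rendering of that logic; the function name version_lt is my own, and this is not the verbatim helper:

  # Simplified sketch of the component-wise comparison traced above;
  # not a verbatim copy of scripts/common.sh.
  version_lt() {                      # succeeds when $1 sorts before $2
    local -a a b
    local i
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                          # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 is older than 2.x"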
00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:34.929 04:15:20 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:34.929 04:15:20 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.929 04:15:20 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:34.929 04:15:20 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:34.929 04:15:20 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:34.929 04:15:20 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:35.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.452 Waiting for block devices as requested 00:09:35.452 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.452 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.452 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.713 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.027 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:41.027 04:15:26 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:41.027 04:15:26 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:41.027 04:15:26 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:41.027 04:15:26 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.027 04:15:26 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
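The wall of xtrace above and below comes from scan_nvme_ctrls/nvme_get in test/common/nvme/functions.sh: it runs /usr/local/src/nvme-cli/nvme id-ctrl against each controller, splits every output line on ':' with IFS, and caches each reported field in a bash associative array (nvme0[vid], nvme0[sn], nvme0[mdts], ...), so later tests can consult controller capabilities without re-running nvme-cli. A condensed, illustrative version of that loop, not the verbatim function:

  # Condensed sketch of the parsing traced here; not a verbatim copy of
  # test/common/nvme/functions.sh.
  declare -A nvme0
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                  # "vid       " -> "vid"
    val=${val#"${val%%[![:space:]]*}"}        # trim leading blanks
    [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} oacs=${nvme0[oacs]}"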
00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.027 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
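Fields such as oacs=0x12a captured just above are raw bitmasks from the controller's Identify data, and the point of caching them is that a test can gate on a capability with plain bash arithmetic. An illustrative check only; the bit position follows the NVMe specification's OACS layout, and this exact test is not part of the traced script:

  # Illustrative only: testing a capability bit in a value parsed above.
  # In the NVMe spec, OACS bit 3 (0x8) indicates Namespace Management support.
  oacs=0x12a
  if (( oacs & 0x8 )); then
    echo "controller supports namespace management"
  fi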
00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:41.028 04:15:26 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.028 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:41.029 04:15:26 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:41.029 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.030 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:41.031 04:15:26 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:41.031 04:15:26 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:41.031 04:15:26 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:41.031 04:15:26 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.031 04:15:26 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.031 04:15:26 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.031 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:41.032 04:15:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:41.032 
04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.032 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:41.033 04:15:26 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:41.033 04:15:26 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:41.033 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:41.034 04:15:26 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
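Note: the id-ctrl trace above is one long run of the same three-step cycle — test the value just read from nvme-cli, eval it into the controller's associative array, then reset IFS and read the next "field : value" line. A minimal standalone sketch of that pattern follows; parse_id_ctrl is an illustrative name, not a helper from nvme/functions.sh, and it assumes nvme-cli's plain "field : value" id-ctrl output seen in this log.

#!/usr/bin/env bash
# Sketch of the parsing cycle traced above: split each "field : value" line
# from nvme-cli on ':' and store it in a global associative array.
parse_id_ctrl() {
    local dev=$1 reg val
    declare -gA ctrl=()                      # plays the role of nvme1/nvme2
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue            # skip headers and blank lines
        reg=${reg//[[:space:]]/}             # field names are single tokens
        val=${val#"${val%%[![:space:]]*}"}   # trim the padding after ':'
        ctrl[$reg]=$val
    done < <(nvme id-ctrl "$dev")
}

# Usage (hypothetical device): parse_id_ctrl /dev/nvme1; echo "${ctrl[sqes]}"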
00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.034 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.035 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:41.036 
04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:41.036 04:15:26 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:41.036 04:15:26 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:41.036 04:15:26 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.036 04:15:26 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:41.036 04:15:26 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:41.036 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:41.037 04:15:26 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:41.037 04:15:26 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
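The hand-off from nvme1 to nvme2 earlier in the trace also shows the bookkeeping around each nvme_get call: controllers are discovered from /sys/class/nvme, resolved to a PCI address and filtered (pci_can_use), parsed, and then recorded in the ctrls/nvmes/bdfs arrays together with a per-controller namespace array addressed through a nameref. A condensed sketch of that flow, assuming the sysfs layout used in this log; scan_ctrls and the PCI_BLOCKED check are illustrative stand-ins, not the exact functions.sh code.

#!/usr/bin/env bash
# Condensed sketch of the discovery flow traced above: enumerate controllers,
# filter by PCI address, parse id-ctrl/id-ns, and record the results.
declare -A ctrls=() nvmes=() bdfs=()

scan_ctrls() {
    local ctrl dev pci ns
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        dev=${ctrl##*/}                                   # e.g. nvme1
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
        # skip controllers this run is not allowed to touch
        [[ " ${PCI_BLOCKED:-} " == *" $pci "* ]] && continue

        declare -gA "$dev=()" "${dev}_ns=()"
        local -n _ctrl_ns=${dev}_ns                       # nameref, as in the trace
        # ... populate the $dev array from `nvme id-ctrl /dev/$dev` here ...

        for ns in "$ctrl/${dev}n"*; do
            [[ -e $ns ]] || continue
            declare -gA "${ns##*/}=()"
            # ... populate from `nvme id-ns /dev/${ns##*/}` here ...
            _ctrl_ns[${ns##*n}]=${ns##*/}
        done

        ctrls[$dev]=$dev
        nvmes[$dev]=${dev}_ns
        bdfs[$dev]=$pci
    done
}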
00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.037 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:41.038 04:15:26 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
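The per-namespace id-ns blocks carry the size in logical blocks plus a table of LBA formats, and the low bits of flbas select the active format. A small worked decoding of the nvme1n1 values captured earlier (nsze=0x17a17a, flbas=0x7, lbaf7 "ms:64 lbads:12 rp:0 (in use)"); the literals below are copied from this log and the script itself is only illustrative.

#!/usr/bin/env bash
# Worked decoding of the id-ns fields captured for nvme1n1 above.
# flbas bits 0-3 pick the active LBA format; lbads is log2 of the block size.
nsze=0x17a17a      # namespace size in logical blocks
flbas=0x7          # format 7 is active (matches the "(in use)" marker above)
lbads=12           # from lbaf7: "ms:64 lbads:12 rp:0 (in use)"

fmt=$(( flbas & 0xf ))
block_size=$(( 1 << lbads ))
bytes=$(( nsze * block_size ))
printf 'format %d, %d-byte blocks, %d blocks = %d bytes (~%d MiB)\n' \
    "$fmt" "$block_size" "$(( nsze ))" "$bytes" "$(( bytes >> 20 ))"
# -> format 7, 4096-byte blocks, 1548666 blocks = 6343335936 bytes (~6049 MiB)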
00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.038 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:41.039 
04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:41.039 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
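The trace above, and the rest of this section, is a single pattern from nvme/functions.sh repeated once per field: with IFS set to ':', every "field : value" line printed by nvme-cli is read into reg/val and, whenever val is non-empty, eval'd into a global associative array named after the device (nvme2, nvme2n1, ...). The following is a minimal Bash sketch of that pattern for orientation only; the helper body is an approximation reconstructed from the @17-@23 markers, not the exact SPDK code (the real helper also normalizes field names and uses the nvme-cli binary path seen at @16).

nvme_get() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()"                        # e.g. declares a global nvme2n1=()

	while IFS=: read -r reg val; do
		[[ -n $val ]] || continue          # keep only "field : value" lines
		reg=${reg//[[:space:]]/}           # drop padding around the field name
		val=${val#"${val%%[![:space:]]*}"} # trim leading whitespace, keep trailing
		eval "${ref}[\$reg]=\$val"         # e.g. nvme2n1[nsze]=0x100000
	done < <("${NVME_CMD:-nvme}" "$@")         # trace uses /usr/local/src/nvme-cli/nvme
}

# Example call matching the trace (device names are from this log only):
#   nvme_get nvme2n1 id-ns /dev/nvme2n1
#   echo "${nvme2n1[nsze]}"   # -> 0x100000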
00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:41.040 04:15:26 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.040 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:41.041 04:15:26 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:41.041 04:15:26 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.041 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:41.042 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 
04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 
04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:41.043 04:15:26 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:41.043 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:41.044 
04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:41.044 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:41.045 04:15:26 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:41.045 04:15:26 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:41.045 04:15:26 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.045 04:15:26 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:41.045 04:15:26 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.045 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 
04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:41.046 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:41.047 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:41.048 04:15:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:41.048 04:15:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:41.048 04:15:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:41.048 
04:15:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:41.049 04:15:26 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:41.049 04:15:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:41.049 04:15:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:41.049 04:15:26 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:41.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.882 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.882 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.883 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.144 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:42.144 04:15:27 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:42.144 04:15:27 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:42.144 04:15:27 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.144 04:15:27 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:42.144 ************************************ 00:09:42.145 START TEST nvme_simple_copy 00:09:42.145 ************************************ 00:09:42.145 04:15:27 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:42.406 Initializing NVMe Controllers 00:09:42.406 Attaching to 0000:00:10.0 00:09:42.406 Controller supports SCC. Attached to 0000:00:10.0 00:09:42.406 Namespace ID: 1 size: 6GB 00:09:42.406 Initialization complete. 00:09:42.406 00:09:42.406 Controller QEMU NVMe Ctrl (12340 ) 00:09:42.406 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:42.406 Namespace Block Size:4096 00:09:42.406 Writing LBAs 0 to 63 with Random Data 00:09:42.406 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:42.406 LBAs matching Written Data: 64 00:09:42.406 00:09:42.406 real 0m0.259s 00:09:42.406 user 0m0.103s 00:09:42.406 sys 0m0.053s 00:09:42.406 ************************************ 00:09:42.406 END TEST nvme_simple_copy 00:09:42.406 04:15:27 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.406 04:15:27 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:42.406 ************************************ 00:09:42.406 ************************************ 00:09:42.406 END TEST nvme_scc 00:09:42.406 ************************************ 00:09:42.406 00:09:42.406 real 0m7.600s 00:09:42.406 user 0m1.039s 00:09:42.406 sys 0m1.332s 00:09:42.406 04:15:27 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.406 04:15:27 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:42.406 04:15:28 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:42.406 04:15:28 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:42.406 04:15:28 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:42.406 04:15:28 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:42.406 04:15:28 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:42.406 04:15:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.406 04:15:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.406 04:15:28 -- common/autotest_common.sh@10 -- # set +x 00:09:42.406 ************************************ 00:09:42.406 START TEST nvme_fdp 00:09:42.406 ************************************ 00:09:42.406 04:15:28 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:42.406 * Looking for test storage... 
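The controller picked for the simple-copy run above (nvme1 at 0000:00:10.0) is chosen purely from the ONCS word of id-ctrl: bit 8 of oncs advertises the NVMe Copy command, which is exactly what ctrl_has_scc tests with (( oncs & 1 << 8 )). A minimal stand-alone sketch of the same capability check follows, assuming nvme-cli with JSON output and jq are available; the script in this log parses the plain-text id-ctrl output instead, so the device paths and field name here are illustrative.

    # Hedged sketch: list controllers that advertise the Copy command (ONCS bit 8),
    # mirroring the ctrl_has_scc check in the log above.
    # Assumes nvme-cli JSON output (-o json) and jq are installed.
    for ctrl in /sys/class/nvme/nvme*; do
        dev=/dev/${ctrl##*/}
        oncs=$(nvme id-ctrl -o json "$dev" | jq -r '.oncs') || continue
        [[ $oncs =~ ^[0-9]+$ ]] || continue   # skip controllers we could not query
        if (( oncs & (1 << 8) )); then
            printf '%s: ONCS=0x%x -> Simple Copy supported\n' "$dev" "$oncs"
        else
            printf '%s: ONCS=0x%x -> Simple Copy not supported\n' "$dev" "$oncs"
        fi
    done

On the QEMU controllers in this run every ONCS value is 0x15d, so bit 8 is set on all four, and the controller returned by get_ctrl_with_feature (nvme1) is the one the simple-copy test attaches to.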
00:09:42.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.406 04:15:28 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.406 04:15:28 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.406 04:15:28 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.668 04:15:28 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:42.668 04:15:28 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.668 04:15:28 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.668 --rc genhtml_branch_coverage=1 00:09:42.668 --rc genhtml_function_coverage=1 00:09:42.668 --rc genhtml_legend=1 00:09:42.668 --rc geninfo_all_blocks=1 00:09:42.668 --rc geninfo_unexecuted_blocks=1 00:09:42.668 00:09:42.668 ' 00:09:42.668 04:15:28 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.668 --rc genhtml_branch_coverage=1 00:09:42.668 --rc genhtml_function_coverage=1 00:09:42.668 --rc genhtml_legend=1 00:09:42.668 --rc geninfo_all_blocks=1 00:09:42.668 --rc geninfo_unexecuted_blocks=1 00:09:42.668 00:09:42.668 ' 00:09:42.668 04:15:28 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:42.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.668 --rc genhtml_branch_coverage=1 00:09:42.668 --rc genhtml_function_coverage=1 00:09:42.668 --rc genhtml_legend=1 00:09:42.668 --rc geninfo_all_blocks=1 00:09:42.668 --rc geninfo_unexecuted_blocks=1 00:09:42.668 00:09:42.668 ' 00:09:42.668 04:15:28 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.668 --rc genhtml_branch_coverage=1 00:09:42.668 --rc genhtml_function_coverage=1 00:09:42.668 --rc genhtml_legend=1 00:09:42.668 --rc geninfo_all_blocks=1 00:09:42.668 --rc geninfo_unexecuted_blocks=1 00:09:42.668 00:09:42.668 ' 00:09:42.668 04:15:28 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:42.668 04:15:28 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:42.668 04:15:28 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:42.668 04:15:28 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:42.668 04:15:28 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.668 04:15:28 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.668 04:15:28 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.668 04:15:28 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.668 04:15:28 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.668 04:15:28 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:42.669 04:15:28 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
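The scan that starts below for the FDP test re-creates the same bookkeeping the SCC run used above: scan_nvme_ctrls walks /sys/class/nvme, feeds each controller's plain-text nvme id-ctrl output through IFS=: read -r reg val, and evals every register into a per-controller associative array (nvme0, nvme1, ...) alongside the ctrls/nvmes/bdfs/ordered_ctrls maps declared next. A compressed sketch of that parsing pattern, assuming nvme-cli is installed; the array and variable names here are illustrative stand-ins, not the ones functions.sh uses.

    # Hedged sketch: fold plain-text "nvme id-ctrl" output into a bash associative
    # array, using the same "reg : val" split the nvme_get helper in this log performs.
    declare -A ctrl_regs
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # drop the column padding around the name
        [[ -n $reg && -n $val ]] || continue
        ctrl_regs[$reg]=${val# }        # keep the value, minus its leading space
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl_regs[vid]} oncs=${ctrl_regs[oncs]} mdts=${ctrl_regs[mdts]}"

The real helper stores the same data in dynamically named arrays via eval (hence the eval 'nvme0[vid]=...' lines throughout this log), which is what lets later helpers such as get_nvme_ctrl_feature look a register up by controller name.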
00:09:42.669 04:15:28 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:42.669 04:15:28 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:42.669 04:15:28 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:42.669 04:15:28 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:42.669 04:15:28 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:42.669 04:15:28 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:42.669 04:15:28 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:42.669 04:15:28 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:42.669 04:15:28 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:42.669 04:15:28 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.669 04:15:28 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:42.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.931 Waiting for block devices as requested 00:09:42.931 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.192 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.192 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.192 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.484 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:48.484 04:15:33 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:48.484 04:15:33 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:48.484 04:15:33 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:48.484 04:15:33 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:48.484 04:15:33 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:48.484 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:48.485 04:15:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:48.485 04:15:33 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
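(Editor's note, illustrative only.) The trace above shows nvme/functions.sh walking the output of `nvme id-ctrl` one "reg : val" line at a time (IFS=:, read -r reg val) and eval-ing each pair into a bash associative array such as nvme0[]. A minimal stand-alone sketch of that same pattern, not the SPDK helper itself, assuming nvme-cli is installed and a controller is visible at /dev/nvme0:

    #!/usr/bin/env bash
    # Sketch: collect `nvme id-ctrl` fields into an associative array,
    # mirroring the reg/val loop visible in the trace above.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # e.g. "mdts      " -> "mdts"
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }             # drop the space following the colon
    done < <(nvme id-ctrl /dev/nvme0)

    echo "mdts=${ctrl[mdts]} ver=${ctrl[ver]} oacs=${ctrl[oacs]}"

The SPDK script repeats the same loop per namespace (id-ns) and per controller found under /sys/class/nvme, which is why the identical pattern recurs throughout this section of the log.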
00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.485 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:48.486 04:15:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:48.486 04:15:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:48.486 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:48.487 04:15:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:48.487 
04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:48.487 04:15:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:48.487 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:48.487 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.487 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.487 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:48.488 04:15:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:48.488 04:15:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:48.488 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:48.489 04:15:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:48.489 04:15:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:48.489 04:15:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:48.489 04:15:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:48.489 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 
04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:48.490 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 
04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:48.491 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:48.492 04:15:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.492 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:48.493 04:15:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.493 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:48.494 04:15:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:48.494 04:15:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:48.494 04:15:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:48.494 04:15:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:48.494 
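At this point in the trace, nvme1 and its namespace nvme1n1 are fully registered (ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns, bdfs[nvme1]=0000:00:10.0) and enumeration moves on to the controller at 0000:00:12.0 as nvme2. The pattern repeated throughout this output is nvme_get from nvme/functions.sh: it runs nvme id-ctrl (or id-ns) on the device, splits each output line with IFS=: into a register name and a value, and evals the pair into a global associative array named after the device. A minimal standalone sketch of that parsing idea follows; it is not the exact functions.sh code, and the device path, array name, and whitespace handling are illustrative only:

  #!/usr/bin/env bash
  # Sketch only: parse `nvme id-ctrl` output into an associative array
  # keyed by register name. Assumes nvme-cli is installed and /dev/nvme1
  # exists; both are placeholders taken from the trace above.
  declare -A ctrl
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue            # keep only "reg : value" lines
      reg=${reg//[[:space:]]/}             # strip whitespace from the key
      val=${val#"${val%%[![:space:]]*}"}   # trim leading whitespace from the value
      ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme1)
  # e.g. the trace above records oacs=0x12a and oncs=0x15d for this controller
  echo "oacs=${ctrl[oacs]:-unset} oncs=${ctrl[oncs]:-unset}"
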
04:15:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:48.494 04:15:34 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.494 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.495 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
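For reference, the nvme1n1 id-ns values captured earlier in this trace decode as follows: flbas=0x7 selects LBA format index 7 (bits 3:0 of FLBAS per the NVMe Identify Namespace layout), and lbaf7 is reported as 'ms:64 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte data blocks with 64 bytes of per-block metadata. With nsze=0x17a17a (1,548,666 blocks) that is roughly 6.3 GB of addressable data. A quick shell check of the same arithmetic, using the values copied from the trace:

  # values as recorded for nvme1n1 above
  flbas=0x7; nsze=0x17a17a; lbads=12
  fmt=$(( flbas & 0xf ))                       # in-use LBA format index -> 7
  echo "lba format: $fmt"
  echo "block size: $(( 1 << lbads )) bytes"   # 4096
  echo "capacity:   $(( nsze * (1 << lbads) )) bytes"
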
00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:48.496 04:15:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.496 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:48.497 04:15:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:48.497 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
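Zooming out from the per-field lines, the functions.sh@53-58 frames visible at the start of each nvme2nN block show how a controller's namespaces are enumerated and registered. The loop below is a reconstruction from the commands shown in the trace; details the trace does not show (the ns_dev expansion, the array type of nvme2_ns) are assumed, and the nvme_get call is left commented out because its body lives in functions.sh.

  declare -a nvme2_ns=()                        # array type assumed; the trace only shows the nameref
  declare -n _ctrl_ns=nvme2_ns                  # functions.sh@53 uses local -n inside the function
  ctrl=/sys/class/nvme/nvme2
  for ns in "$ctrl/${ctrl##*/}n"*; do           # functions.sh@54: nvme2n1, nvme2n2, nvme2n3 here
      [[ -e $ns ]] || continue                  # functions.sh@55 checks the sysfs entry exists
      ns_dev=${ns##*/}                          # functions.sh@56: e.g. nvme2n1
      # nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # functions.sh@57 fills the per-namespace array
      _ctrl_ns[${ns##*n}]=$ns_dev               # functions.sh@58: keyed by namespace number
  done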
00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.498 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.499 04:15:34 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:48.499 04:15:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.499 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:48.500 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
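Once a namespace array such as nvme2n1 or nvme2n2 is populated, later test code can look fields up directly instead of re-running nvme-cli. As one hypothetical example (not taken from functions.sh), the in-use LBA data size can be derived from the flbas and lbafN values recorded above (flbas=0x4, with lbaf4 reporting lbads:12 for these namespaces):

  declare -A ns_info=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')  # values copied from the trace
  fmt=$(( ${ns_info[flbas]} & 0xf ))            # low 4 bits of FLBAS select the active LBA format
  lbaf=${ns_info[lbaf$fmt]}
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}     # -> 12
  echo "LBA data size: $((1 << lbads)) bytes"   # -> 4096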
00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:48.501 
04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:48.501 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:48.502 04:15:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:48.502 04:15:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:48.502 04:15:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:48.502 04:15:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:48.502 04:15:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:48.502 04:15:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.502 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.502 04:15:34 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 
04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.503 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 
04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:48.504 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:48.505 04:15:34 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:48.505 04:15:34 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
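The nvme_get / ctrl_has_fdp sequence traced above boils down to two steps: parse the text output of nvme id-ctrl into a bash associative array (one key per register, e.g. ctratt), then treat a controller as FDP-capable when bit 19 of CTRATT is set (0x88010 has it, 0x8000 does not). A minimal standalone sketch of that pattern, assuming plain nvme-cli output rather than SPDK's actual nvme/functions.sh helpers:

  # Hedged sketch, not the real nvme/functions.sh: read "reg : val" lines from
  # nvme-cli into an associative array, then test CTRATT bit 19 for FDP support.
  declare -A ctrl
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue
      reg=${reg//[[:space:]]/}        # e.g. "ctratt"
      val=${val# }                    # e.g. "0x88010"
      ctrl[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)

  if (( ${ctrl[ctratt]} & 1 << 19 )); then
      echo "nvme3 supports FDP"       # 0x88010 -> bit set; 0x8000 (nvme0/1/2) -> not set
  fi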
00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:48.767 04:15:34 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:48.767 04:15:34 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:48.767 04:15:34 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:48.767 04:15:34 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:49.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:49.599 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.599 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.599 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.599 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.599 04:15:35 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:49.599 04:15:35 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:49.599 04:15:35 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.599 04:15:35 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:49.599 ************************************ 00:09:49.599 START TEST nvme_flexible_data_placement 00:09:49.599 ************************************ 00:09:49.599 04:15:35 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:49.861 Initializing NVMe Controllers 00:09:49.861 Attaching to 0000:00:13.0 00:09:49.861 Controller supports FDP Attached to 0000:00:13.0 00:09:49.861 Namespace ID: 1 Endurance Group ID: 1 00:09:49.861 Initialization complete. 00:09:49.861 00:09:49.861 ================================== 00:09:49.861 == FDP tests for Namespace: #01 == 00:09:49.861 ================================== 00:09:49.861 00:09:49.861 Get Feature: FDP: 00:09:49.861 ================= 00:09:49.861 Enabled: Yes 00:09:49.861 FDP configuration Index: 0 00:09:49.861 00:09:49.861 FDP configurations log page 00:09:49.861 =========================== 00:09:49.861 Number of FDP configurations: 1 00:09:49.861 Version: 0 00:09:49.861 Size: 112 00:09:49.861 FDP Configuration Descriptor: 0 00:09:49.861 Descriptor Size: 96 00:09:49.861 Reclaim Group Identifier format: 2 00:09:49.861 FDP Volatile Write Cache: Not Present 00:09:49.861 FDP Configuration: Valid 00:09:49.861 Vendor Specific Size: 0 00:09:49.861 Number of Reclaim Groups: 2 00:09:49.861 Number of Reclaim Unit Handles: 8 00:09:49.861 Max Placement Identifiers: 128 00:09:49.861 Number of Namespaces Supported: 256 00:09:49.861 Reclaim unit Nominal Size: 6000000 bytes 00:09:49.861 Estimated Reclaim Unit Time Limit: Not Reported 00:09:49.861 RUH Desc #000: RUH Type: Initially Isolated 00:09:49.861 RUH Desc #001: RUH Type: Initially Isolated 00:09:49.861 RUH Desc #002: RUH Type: Initially Isolated 00:09:49.861 RUH Desc #003: RUH Type: Initially Isolated 00:09:49.861 RUH Desc #004: RUH Type: Initially Isolated 00:09:49.861 RUH Desc #005: RUH Type: Initially Isolated 00:09:49.861 RUH Desc #006: RUH Type: Initially Isolated 00:09:49.861 RUH Desc #007: RUH Type: Initially Isolated 00:09:49.861 00:09:49.861 FDP reclaim unit handle usage log page 00:09:49.861 ====================================== 00:09:49.861 Number of Reclaim Unit Handles: 8 00:09:49.861 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:49.861 RUH Usage Desc #001: RUH Attributes: Unused 00:09:49.861 RUH Usage Desc #002: RUH Attributes: Unused 00:09:49.861 RUH Usage Desc #003: RUH Attributes: Unused 00:09:49.861 RUH Usage Desc #004: RUH Attributes: Unused 00:09:49.861 RUH Usage Desc #005: RUH Attributes: Unused 00:09:49.861 RUH Usage Desc #006: RUH Attributes: Unused 00:09:49.861 RUH Usage Desc #007: RUH Attributes: Unused 00:09:49.861 00:09:49.861 FDP statistics log page 00:09:49.861 ======================= 00:09:49.861 Host bytes with metadata written: 2072121344 00:09:49.861 Media bytes with metadata written: 2072465408 00:09:49.861 Media bytes erased: 0 00:09:49.861 00:09:49.861 FDP Reclaim unit handle status 00:09:49.861 ============================== 00:09:49.861 Number of RUHS descriptors: 2 00:09:49.861 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000027df 00:09:49.861 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:49.861 00:09:49.861 FDP write on placement id: 0 success 00:09:49.861 00:09:49.861 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:49.861 00:09:49.861 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:49.861 00:09:49.861 Get Feature: FDP Events for Placement handle: #0 00:09:49.861 ======================== 00:09:49.861 Number of FDP Events: 6 00:09:49.861 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:49.861 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:49.861 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:49.861 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:49.861 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:49.861 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:49.861 00:09:49.861 FDP events log page 00:09:49.861 =================== 00:09:49.861 Number of FDP events: 1 00:09:49.861 FDP Event #0: 00:09:49.861 Event Type: RU Not Written to Capacity 00:09:49.861 Placement Identifier: Valid 00:09:49.861 NSID: Valid 00:09:49.861 Location: Valid 00:09:49.861 Placement Identifier: 0 00:09:49.861 Event Timestamp: 3 00:09:49.861 Namespace Identifier: 1 00:09:49.861 Reclaim Group Identifier: 0 00:09:49.861 Reclaim Unit Handle Identifier: 0 00:09:49.861 00:09:49.861 FDP test passed 00:09:49.861 00:09:49.861 real 0m0.230s 00:09:49.861 user 0m0.070s 00:09:49.861 sys 0m0.058s 00:09:49.861 04:15:35 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.861 ************************************ 00:09:49.861 END TEST nvme_flexible_data_placement 00:09:49.861 ************************************ 00:09:49.861 04:15:35 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:50.123 ************************************ 00:09:50.123 END TEST nvme_fdp 00:09:50.123 ************************************ 00:09:50.123 00:09:50.123 real 0m7.565s 00:09:50.123 user 0m0.947s 00:09:50.123 sys 0m1.377s 00:09:50.123 04:15:35 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.123 04:15:35 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:50.123 04:15:35 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:50.123 04:15:35 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:50.123 04:15:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.123 04:15:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.123 04:15:35 -- common/autotest_common.sh@10 -- # set +x 00:09:50.123 ************************************ 00:09:50.123 START TEST nvme_rpc 00:09:50.123 ************************************ 00:09:50.123 04:15:35 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:50.123 * Looking for test storage... 
00:09:50.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:50.123 04:15:35 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.123 04:15:35 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.123 04:15:35 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.123 04:15:35 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.123 04:15:35 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:50.123 04:15:35 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.123 04:15:35 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.123 --rc genhtml_branch_coverage=1 00:09:50.123 --rc genhtml_function_coverage=1 00:09:50.123 --rc genhtml_legend=1 00:09:50.123 --rc geninfo_all_blocks=1 00:09:50.123 --rc geninfo_unexecuted_blocks=1 00:09:50.123 00:09:50.124 ' 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.124 --rc genhtml_branch_coverage=1 00:09:50.124 --rc genhtml_function_coverage=1 00:09:50.124 --rc genhtml_legend=1 00:09:50.124 --rc geninfo_all_blocks=1 00:09:50.124 --rc geninfo_unexecuted_blocks=1 00:09:50.124 00:09:50.124 ' 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:50.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.124 --rc genhtml_branch_coverage=1 00:09:50.124 --rc genhtml_function_coverage=1 00:09:50.124 --rc genhtml_legend=1 00:09:50.124 --rc geninfo_all_blocks=1 00:09:50.124 --rc geninfo_unexecuted_blocks=1 00:09:50.124 00:09:50.124 ' 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.124 --rc genhtml_branch_coverage=1 00:09:50.124 --rc genhtml_function_coverage=1 00:09:50.124 --rc genhtml_legend=1 00:09:50.124 --rc geninfo_all_blocks=1 00:09:50.124 --rc geninfo_unexecuted_blocks=1 00:09:50.124 00:09:50.124 ' 00:09:50.124 04:15:35 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.124 04:15:35 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:50.124 04:15:35 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:50.385 04:15:35 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:50.385 04:15:35 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:50.385 04:15:35 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:50.385 04:15:35 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:50.385 04:15:35 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77485 00:09:50.385 04:15:35 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:50.385 04:15:35 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77485 00:09:50.385 04:15:35 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:50.385 04:15:35 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77485 ']' 00:09:50.385 04:15:35 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.385 04:15:35 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.385 04:15:35 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.385 04:15:35 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.385 04:15:35 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.385 [2024-11-17 04:15:35.940460] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:09:50.385 [2024-11-17 04:15:35.940570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77485 ] 00:09:50.385 [2024-11-17 04:15:36.099077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:50.647 [2024-11-17 04:15:36.119125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.647 [2024-11-17 04:15:36.119161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.220 04:15:36 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.220 04:15:36 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:51.220 04:15:36 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:51.479 Nvme0n1 00:09:51.479 04:15:37 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:51.479 04:15:37 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:51.740 request: 00:09:51.740 { 00:09:51.740 "bdev_name": "Nvme0n1", 00:09:51.740 "filename": "non_existing_file", 00:09:51.740 "method": "bdev_nvme_apply_firmware", 00:09:51.740 "req_id": 1 00:09:51.740 } 00:09:51.740 Got JSON-RPC error response 00:09:51.740 response: 00:09:51.740 { 00:09:51.740 "code": -32603, 00:09:51.740 "message": "open file failed." 00:09:51.740 } 00:09:51.740 04:15:37 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:51.740 04:15:37 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:51.740 04:15:37 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:51.740 04:15:37 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:51.740 04:15:37 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77485 00:09:51.740 04:15:37 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77485 ']' 00:09:51.740 04:15:37 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77485 00:09:51.740 04:15:37 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:51.740 04:15:37 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.740 04:15:37 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77485 00:09:51.740 04:15:37 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.740 killing process with pid 77485 00:09:51.740 04:15:37 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.740 04:15:37 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77485' 00:09:51.740 04:15:37 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77485 00:09:51.740 04:15:37 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77485 00:09:52.001 00:09:52.001 real 0m2.018s 00:09:52.001 user 0m3.911s 00:09:52.001 sys 0m0.456s 00:09:52.001 04:15:37 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.001 ************************************ 00:09:52.001 END TEST nvme_rpc 00:09:52.001 ************************************ 00:09:52.001 04:15:37 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.001 04:15:37 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:52.001 04:15:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:52.001 04:15:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.001 04:15:37 -- common/autotest_common.sh@10 -- # set +x 00:09:52.262 ************************************ 00:09:52.262 START TEST nvme_rpc_timeouts 00:09:52.262 ************************************ 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:52.262 * Looking for test storage... 00:09:52.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.262 04:15:37 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.262 --rc genhtml_branch_coverage=1 00:09:52.262 --rc genhtml_function_coverage=1 00:09:52.262 --rc genhtml_legend=1 00:09:52.262 --rc geninfo_all_blocks=1 00:09:52.262 --rc geninfo_unexecuted_blocks=1 00:09:52.262 00:09:52.262 ' 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.262 --rc genhtml_branch_coverage=1 00:09:52.262 --rc genhtml_function_coverage=1 00:09:52.262 --rc genhtml_legend=1 00:09:52.262 --rc geninfo_all_blocks=1 00:09:52.262 --rc geninfo_unexecuted_blocks=1 00:09:52.262 00:09:52.262 ' 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.262 --rc genhtml_branch_coverage=1 00:09:52.262 --rc genhtml_function_coverage=1 00:09:52.262 --rc genhtml_legend=1 00:09:52.262 --rc geninfo_all_blocks=1 00:09:52.262 --rc geninfo_unexecuted_blocks=1 00:09:52.262 00:09:52.262 ' 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.262 --rc genhtml_branch_coverage=1 00:09:52.262 --rc genhtml_function_coverage=1 00:09:52.262 --rc genhtml_legend=1 00:09:52.262 --rc geninfo_all_blocks=1 00:09:52.262 --rc geninfo_unexecuted_blocks=1 00:09:52.262 00:09:52.262 ' 00:09:52.262 04:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.262 04:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77534 00:09:52.262 04:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77534 00:09:52.262 04:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77566 00:09:52.262 04:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
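The nvme_rpc run above and the nvme_rpc_timeouts run being set up here share the same harness shape: launch spdk_tgt, register a cleanup trap keyed on its PID, wait for the RPC socket, then drive the target through scripts/rpc.py. A minimal standalone sketch of that skeleton, using the paths visible in this trace (waitforlisten is a helper from the test common code and is assumed here, not reproduced):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &           # target on cores 0-1
    spdk_tgt_pid=$!
    trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT         # always tear the target down
    waitforlisten "$spdk_tgt_pid"                                      # block until /var/tmp/spdk.sock answers
    "$rpc_py" save_config > /tmp/settings_default_77534                # any RPC works once the socket is live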
00:09:52.262 04:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77566 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77566 ']' 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.262 04:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:52.262 04:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:52.262 [2024-11-17 04:15:37.954893] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:52.262 [2024-11-17 04:15:37.955008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77566 ] 00:09:52.527 [2024-11-17 04:15:38.112765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:52.527 [2024-11-17 04:15:38.132734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.527 [2024-11-17 04:15:38.132774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.099 04:15:38 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.099 Checking default timeout settings: 00:09:53.099 04:15:38 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:53.099 04:15:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:53.099 04:15:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:53.672 Making settings changes with rpc: 00:09:53.672 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:53.672 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:53.672 Check default vs. modified settings: 00:09:53.672 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:53.672 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:53.935 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:53.935 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:53.935 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77534 00:09:53.935 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:53.935 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:53.935 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:53.935 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77534 00:09:53.935 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:53.935 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.238 Setting action_on_timeout is changed as expected. 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77534 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77534 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.238 Setting timeout_us is changed as expected. 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77534 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77534 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.238 Setting timeout_admin_us is changed as expected. 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77534 /tmp/settings_modified_77534 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77566 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77566 ']' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77566 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77566 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.238 killing process with pid 77566 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77566' 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77566 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77566 00:09:54.238 RPC TIMEOUT SETTING TEST PASSED. 00:09:54.238 04:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
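The PASSED line above comes from a straightforward config diff: the script snapshots save_config with the defaults, applies bdev_nvme_set_options, snapshots again, then extracts each setting from both snapshots and requires them to differ. A condensed sketch of that check, using the same values and tmp files seen in this run:

    "$rpc_py" save_config > /tmp/settings_default_77534
    "$rpc_py" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc_py" save_config > /tmp/settings_modified_77534
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_77534 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_77534 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && { echo "ERROR: $setting not changed"; exit 1; }
        echo "Setting $setting is changed as expected."
    done

With the defaults of none/0/0 this yields abort, 12000000 and 24000000 on the modified side, matching the three "changed as expected" lines in the trace.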
00:09:54.238 00:09:54.238 real 0m2.225s 00:09:54.238 user 0m4.510s 00:09:54.238 sys 0m0.449s 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.238 ************************************ 00:09:54.238 END TEST nvme_rpc_timeouts 00:09:54.238 ************************************ 00:09:54.238 04:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:54.499 04:15:40 -- spdk/autotest.sh@239 -- # uname -s 00:09:54.499 04:15:40 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:54.499 04:15:40 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:54.499 04:15:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.499 04:15:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.499 04:15:40 -- common/autotest_common.sh@10 -- # set +x 00:09:54.499 ************************************ 00:09:54.499 START TEST sw_hotplug 00:09:54.499 ************************************ 00:09:54.499 04:15:40 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:54.499 * Looking for test storage... 00:09:54.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:54.499 04:15:40 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:54.499 04:15:40 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:54.499 04:15:40 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:54.499 04:15:40 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.499 04:15:40 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:54.499 04:15:40 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.499 04:15:40 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:54.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.499 --rc genhtml_branch_coverage=1 00:09:54.499 --rc genhtml_function_coverage=1 00:09:54.499 --rc genhtml_legend=1 00:09:54.499 --rc geninfo_all_blocks=1 00:09:54.499 --rc geninfo_unexecuted_blocks=1 00:09:54.499 00:09:54.499 ' 00:09:54.499 04:15:40 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:54.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.499 --rc genhtml_branch_coverage=1 00:09:54.499 --rc genhtml_function_coverage=1 00:09:54.499 --rc genhtml_legend=1 00:09:54.499 --rc geninfo_all_blocks=1 00:09:54.499 --rc geninfo_unexecuted_blocks=1 00:09:54.499 00:09:54.499 ' 00:09:54.499 04:15:40 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:54.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.499 --rc genhtml_branch_coverage=1 00:09:54.499 --rc genhtml_function_coverage=1 00:09:54.499 --rc genhtml_legend=1 00:09:54.499 --rc geninfo_all_blocks=1 00:09:54.499 --rc geninfo_unexecuted_blocks=1 00:09:54.499 00:09:54.499 ' 00:09:54.499 04:15:40 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:54.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.499 --rc genhtml_branch_coverage=1 00:09:54.499 --rc genhtml_function_coverage=1 00:09:54.499 --rc genhtml_legend=1 00:09:54.499 --rc geninfo_all_blocks=1 00:09:54.499 --rc geninfo_unexecuted_blocks=1 00:09:54.499 00:09:54.499 ' 00:09:54.499 04:15:40 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:54.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:55.020 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:55.020 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:55.020 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:55.020 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:55.020 04:15:40 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:55.020 04:15:40 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:55.020 04:15:40 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:55.020 04:15:40 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:55.020 04:15:40 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:55.020 04:15:40 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:55.020 04:15:40 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:55.020 04:15:40 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:55.020 04:15:40 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:55.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:55.540 Waiting for block devices as requested 00:09:55.540 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.540 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.800 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.800 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.085 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:01.085 04:15:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:01.085 04:15:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:01.345 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:01.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:01.345 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:01.605 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:01.865 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.865 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.865 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:01.866 04:15:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78417 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:02.126 04:15:47 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:02.126 04:15:47 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:02.126 04:15:47 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:02.126 04:15:47 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:02.126 04:15:47 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:02.126 04:15:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:02.126 Initializing NVMe Controllers 00:10:02.126 Attaching to 0000:00:10.0 00:10:02.126 Attaching to 0000:00:11.0 00:10:02.126 Attached to 0000:00:11.0 00:10:02.126 Attached to 0000:00:10.0 00:10:02.126 Initialization complete. Starting I/O... 
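From here the harness keeps build/examples/hotplug doing I/O against the two allowed controllers (0000:00:10.0 and 0000:00:11.0) while the script removes and re-probes them three times. A rough sketch of that outer loop follows; the trace only shows bare echo commands, so the sysfs remove path below is an assumption and the rebind step is summarized in a comment:

    /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning &
    hotplug_pid=$!
    sleep 6                                                  # hotplug_wait: let the app attach and start I/O
    for ((event = 0; event < 3; event++)); do                # hotplug_events=3
        for dev in 0000:00:10.0 0000:00:11.0; do
            echo 1 > "/sys/bus/pci/devices/$dev/remove"      # assumed target of the bare 'echo 1' in the trace
        done
        # rebind each device to uio_pci_generic and re-probe it (the 'echo uio_pci_generic' /
        # 'echo 0000:00:10.0' writes in the trace), then give the app time to re-attach
        sleep 12
    done
    wait "$hotplug_pid"

Each pass shows up in the log as a pair of "Controller removed" blocks followed by fresh "Attaching to"/"Attached to" lines and a new run of I/O counters.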
00:10:02.126 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:02.126 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:02.126 00:10:03.509 QEMU NVMe Ctrl (12341 ): 2808 I/Os completed (+2808) 00:10:03.509 QEMU NVMe Ctrl (12340 ): 2808 I/Os completed (+2808) 00:10:03.509 00:10:04.453 QEMU NVMe Ctrl (12341 ): 5898 I/Os completed (+3090) 00:10:04.453 QEMU NVMe Ctrl (12340 ): 5896 I/Os completed (+3088) 00:10:04.453 00:10:05.398 QEMU NVMe Ctrl (12341 ): 8902 I/Os completed (+3004) 00:10:05.398 QEMU NVMe Ctrl (12340 ): 8896 I/Os completed (+3000) 00:10:05.398 00:10:06.345 QEMU NVMe Ctrl (12341 ): 12398 I/Os completed (+3496) 00:10:06.345 QEMU NVMe Ctrl (12340 ): 12409 I/Os completed (+3513) 00:10:06.345 00:10:07.289 QEMU NVMe Ctrl (12341 ): 16034 I/Os completed (+3636) 00:10:07.289 QEMU NVMe Ctrl (12340 ): 16049 I/Os completed (+3640) 00:10:07.289 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:08.230 [2024-11-17 04:15:53.654505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:08.230 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:08.230 [2024-11-17 04:15:53.655470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.655511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.655524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.655537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:08.230 [2024-11-17 04:15:53.656436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.656467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.656478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.656491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:08.230 [2024-11-17 04:15:53.679106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:08.230 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:08.230 [2024-11-17 04:15:53.679845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.679875] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.679888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.679914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:08.230 [2024-11-17 04:15:53.680735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.680760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.680775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 [2024-11-17 04:15:53.680785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:08.230 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:08.230 EAL: Scan for (pci) bus failed. 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:08.230 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:08.230 Attaching to 0000:00:10.0 00:10:08.230 Attached to 0000:00:10.0 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:08.230 04:15:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:08.230 Attaching to 0000:00:11.0 00:10:08.230 Attached to 0000:00:11.0 00:10:09.169 QEMU NVMe Ctrl (12340 ): 3911 I/Os completed (+3911) 00:10:09.169 QEMU NVMe Ctrl (12341 ): 3607 I/Os completed (+3607) 00:10:09.169 00:10:10.554 QEMU NVMe Ctrl (12340 ): 7541 I/Os completed (+3630) 00:10:10.554 QEMU NVMe Ctrl (12341 ): 7298 I/Os completed (+3691) 00:10:10.554 00:10:11.125 QEMU NVMe Ctrl (12340 ): 11169 I/Os completed (+3628) 00:10:11.125 QEMU NVMe Ctrl (12341 ): 10929 I/Os completed (+3631) 00:10:11.125 00:10:12.509 QEMU NVMe Ctrl (12340 ): 14872 I/Os completed (+3703) 00:10:12.509 QEMU NVMe Ctrl (12341 ): 14638 I/Os completed (+3709) 00:10:12.509 00:10:13.450 QEMU NVMe Ctrl (12340 ): 18552 I/Os completed (+3680) 00:10:13.450 QEMU NVMe Ctrl (12341 ): 18318 I/Os completed (+3680) 00:10:13.450 00:10:14.381 QEMU NVMe Ctrl (12340 ): 22731 I/Os completed (+4179) 00:10:14.381 QEMU NVMe Ctrl (12341 ): 22472 I/Os completed (+4154) 00:10:14.381 00:10:15.317 QEMU NVMe Ctrl (12340 ): 26957 I/Os completed (+4226) 00:10:15.317 QEMU NVMe Ctrl (12341 ): 26696 I/Os completed (+4224) 
00:10:15.317 00:10:16.248 QEMU NVMe Ctrl (12340 ): 31177 I/Os completed (+4220) 00:10:16.248 QEMU NVMe Ctrl (12341 ): 30905 I/Os completed (+4209) 00:10:16.248 00:10:17.182 QEMU NVMe Ctrl (12340 ): 35414 I/Os completed (+4237) 00:10:17.182 QEMU NVMe Ctrl (12341 ): 35112 I/Os completed (+4207) 00:10:17.182 00:10:18.599 QEMU NVMe Ctrl (12340 ): 39635 I/Os completed (+4221) 00:10:18.599 QEMU NVMe Ctrl (12341 ): 39324 I/Os completed (+4212) 00:10:18.599 00:10:19.165 QEMU NVMe Ctrl (12340 ): 43853 I/Os completed (+4218) 00:10:19.165 QEMU NVMe Ctrl (12341 ): 43523 I/Os completed (+4199) 00:10:19.165 00:10:20.540 QEMU NVMe Ctrl (12340 ): 48095 I/Os completed (+4242) 00:10:20.540 QEMU NVMe Ctrl (12341 ): 47752 I/Os completed (+4229) 00:10:20.540 00:10:20.540 04:16:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:20.540 04:16:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:20.540 04:16:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:20.540 04:16:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:20.540 [2024-11-17 04:16:05.950691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:20.540 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:20.540 [2024-11-17 04:16:05.951496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.951532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.951546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.951564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:20.540 [2024-11-17 04:16:05.952672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.952701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.952712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.952724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 04:16:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:20.540 04:16:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:20.540 [2024-11-17 04:16:05.971110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:20.540 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:20.540 [2024-11-17 04:16:05.971877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.971914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.971929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.971941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:20.540 [2024-11-17 04:16:05.972800] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.972828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.972842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 [2024-11-17 04:16:05.972851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.540 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/class 00:10:20.540 EAL: Scan for (pci) bus failed. 00:10:20.540 04:16:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:20.540 04:16:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:20.540 Attaching to 0000:00:10.0 00:10:20.540 Attached to 0000:00:10.0 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.540 04:16:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:20.540 Attaching to 0000:00:11.0 00:10:20.540 Attached to 0000:00:11.0 00:10:21.474 QEMU NVMe Ctrl (12340 ): 3027 I/Os completed (+3027) 00:10:21.474 QEMU NVMe Ctrl (12341 ): 2675 I/Os completed (+2675) 00:10:21.474 00:10:22.408 QEMU NVMe Ctrl (12340 ): 7224 I/Os completed (+4197) 00:10:22.408 QEMU NVMe Ctrl (12341 ): 6916 I/Os completed (+4241) 00:10:22.408 00:10:23.345 QEMU NVMe Ctrl (12340 ): 11447 I/Os completed (+4223) 00:10:23.345 QEMU NVMe Ctrl (12341 ): 11128 I/Os completed (+4212) 00:10:23.345 00:10:24.305 QEMU NVMe Ctrl (12340 ): 15166 I/Os completed (+3719) 00:10:24.305 QEMU NVMe Ctrl (12341 ): 14907 I/Os completed (+3779) 00:10:24.305 00:10:25.248 QEMU NVMe Ctrl (12340 ): 18902 I/Os completed (+3736) 00:10:25.248 QEMU NVMe Ctrl (12341 ): 18654 I/Os completed (+3747) 00:10:25.248 00:10:26.188 QEMU NVMe Ctrl (12340 ): 22948 I/Os completed (+4046) 00:10:26.188 QEMU NVMe Ctrl (12341 ): 22719 I/Os completed (+4065) 00:10:26.188 00:10:27.561 QEMU NVMe Ctrl (12340 ): 27218 I/Os completed (+4270) 00:10:27.561 QEMU NVMe Ctrl (12341 ): 26983 I/Os completed (+4264) 00:10:27.561 
00:10:28.126 QEMU NVMe Ctrl (12340 ): 31471 I/Os completed (+4253) 00:10:28.126 QEMU NVMe Ctrl (12341 ): 31221 I/Os completed (+4238) 00:10:28.126 00:10:29.515 QEMU NVMe Ctrl (12340 ): 35597 I/Os completed (+4126) 00:10:29.515 QEMU NVMe Ctrl (12341 ): 35362 I/Os completed (+4141) 00:10:29.515 00:10:30.489 QEMU NVMe Ctrl (12340 ): 39159 I/Os completed (+3562) 00:10:30.489 QEMU NVMe Ctrl (12341 ): 38938 I/Os completed (+3576) 00:10:30.489 00:10:31.423 QEMU NVMe Ctrl (12340 ): 42649 I/Os completed (+3490) 00:10:31.423 QEMU NVMe Ctrl (12341 ): 42419 I/Os completed (+3481) 00:10:31.423 00:10:32.356 QEMU NVMe Ctrl (12340 ): 46828 I/Os completed (+4179) 00:10:32.356 QEMU NVMe Ctrl (12341 ): 46607 I/Os completed (+4188) 00:10:32.356 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:32.614 [2024-11-17 04:16:18.210682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:32.614 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:32.614 [2024-11-17 04:16:18.211485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.211518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.211530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.211549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:32.614 [2024-11-17 04:16:18.212616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.212642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.212652] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.212664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:32.614 [2024-11-17 04:16:18.230109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:32.614 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:32.614 [2024-11-17 04:16:18.230849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.230878] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.230893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.230905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:32.614 [2024-11-17 04:16:18.231715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.231739] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.231751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 [2024-11-17 04:16:18.231760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:32.614 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:32.873 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:32.873 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:32.874 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:32.874 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:32.874 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:32.874 Attaching to 0000:00:10.0 00:10:32.874 Attached to 0000:00:10.0 00:10:32.874 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:32.874 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:32.874 04:16:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:32.874 Attaching to 0000:00:11.0 00:10:32.874 Attached to 0000:00:11.0 00:10:32.874 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:32.874 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:32.874 [2024-11-17 04:16:18.483249] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:45.103 04:16:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:45.103 04:16:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:45.103 04:16:30 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.83 00:10:45.104 04:16:30 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.83 00:10:45.104 04:16:30 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:45.104 04:16:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.83 00:10:45.104 04:16:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.83 2 00:10:45.104 remove_attach_helper took 42.83s to complete (handling 2 nvme drive(s)) 04:16:30 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:51.688 04:16:36 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78417 00:10:51.688 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78417) - No such process 00:10:51.688 04:16:36 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78417 00:10:51.688 04:16:36 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:51.688 04:16:36 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:51.688 04:16:36 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:51.688 04:16:36 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78961 00:10:51.688 04:16:36 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:51.688 04:16:36 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78961 00:10:51.688 04:16:36 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:51.688 04:16:36 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 78961 ']' 00:10:51.688 04:16:36 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.688 04:16:36 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.688 04:16:36 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.688 04:16:36 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.688 04:16:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.688 [2024-11-17 04:16:36.563067] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:10:51.688 [2024-11-17 04:16:36.563183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78961 ] 00:10:51.688 [2024-11-17 04:16:36.720227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.688 [2024-11-17 04:16:36.738965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.688 04:16:37 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.688 04:16:37 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:51.688 04:16:37 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:51.688 04:16:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.688 04:16:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.688 04:16:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.688 04:16:37 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:51.688 04:16:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:51.688 04:16:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:51.949 04:16:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:51.949 04:16:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:51.949 04:16:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:51.949 04:16:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:51.949 04:16:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:51.949 04:16:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:51.949 04:16:37 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:51.949 04:16:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:51.949 04:16:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:51.949 04:16:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.512 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.512 04:16:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.512 04:16:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.513 04:16:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.513 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:58.513 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:58.513 [2024-11-17 04:16:43.503781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:58.513 [2024-11-17 04:16:43.504859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-17 04:16:43.504891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-17 04:16:43.504932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 [2024-11-17 04:16:43.504944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-17 04:16:43.504955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-17 04:16:43.504962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 [2024-11-17 04:16:43.504971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-17 04:16:43.504977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-17 04:16:43.504985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 [2024-11-17 04:16:43.504992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-17 04:16:43.504999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-17 04:16:43.505005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:58.513 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:58.513 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:58.513 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.513 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.513 04:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.513 04:16:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.513 04:16:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.513 04:16:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.513 [2024-11-17 04:16:44.003781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:58.513 [2024-11-17 04:16:44.004961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-17 04:16:44.004993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-17 04:16:44.005003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 [2024-11-17 04:16:44.005015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-17 04:16:44.005021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-17 04:16:44.005029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 [2024-11-17 04:16:44.005036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-17 04:16:44.005044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-17 04:16:44.005050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 [2024-11-17 04:16:44.005058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-17 04:16:44.005065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-17 04:16:44.005072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:58.513 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq 
-r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.079 04:16:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.079 04:16:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.079 04:16:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:59.079 04:16:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.368 04:16:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.368 04:16:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.368 04:16:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.368 04:16:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.368 04:16:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.368 04:16:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.368 04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:11.368 
04:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:11.368 [2024-11-17 04:16:56.903983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:11.368 [2024-11-17 04:16:56.905048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.369 [2024-11-17 04:16:56.905080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.369 [2024-11-17 04:16:56.905092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.369 [2024-11-17 04:16:56.905104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.369 [2024-11-17 04:16:56.905112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.369 [2024-11-17 04:16:56.905119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.369 [2024-11-17 04:16:56.905127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.369 [2024-11-17 04:16:56.905148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.369 [2024-11-17 04:16:56.905156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.369 [2024-11-17 04:16:56.905163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.369 [2024-11-17 04:16:56.905170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.369 [2024-11-17 04:16:56.905177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.627 [2024-11-17 04:16:57.303988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:11.627 [2024-11-17 04:16:57.305120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.627 [2024-11-17 04:16:57.305153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.627 [2024-11-17 04:16:57.305164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.627 [2024-11-17 04:16:57.305174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.627 [2024-11-17 04:16:57.305181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.627 [2024-11-17 04:16:57.305189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.627 [2024-11-17 04:16:57.305195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.627 [2024-11-17 04:16:57.305203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.627 [2024-11-17 04:16:57.305209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.627 [2024-11-17 04:16:57.305216] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.627 [2024-11-17 04:16:57.305222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.627 [2024-11-17 04:16:57.305230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.885 04:16:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.885 04:16:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.885 04:16:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.885 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:12.142 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:12.142 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.142 04:16:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:24.339 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:24.339 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:24.339 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:24.339 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.339 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.339 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.339 04:17:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.339 04:17:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.339 04:17:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.339 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:24.339 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.339 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.339 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.339 [2024-11-17 04:17:09.704195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:24.339 [2024-11-17 04:17:09.707383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.339 [2024-11-17 04:17:09.707505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.339 [2024-11-17 04:17:09.707577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.339 [2024-11-17 04:17:09.707642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.339 [2024-11-17 04:17:09.707663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.339 [2024-11-17 04:17:09.707687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.339 [2024-11-17 04:17:09.707803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.339 [2024-11-17 04:17:09.707821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.339 [2024-11-17 04:17:09.707882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.339 [2024-11-17 04:17:09.707909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.339 [2024-11-17 04:17:09.707926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.339 [2024-11-17 04:17:09.707993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.339 [2024-11-17 04:17:09.708068] bdev_nvme.c:5567:aer_cb: *WARNING*: AER request execute failed 
00:11:24.339 [2024-11-17 04:17:09.708085] bdev_nvme.c:5567:aer_cb: *WARNING*: AER request execute failed 00:11:24.340 [2024-11-17 04:17:09.708101] bdev_nvme.c:5567:aer_cb: *WARNING*: AER request execute failed 00:11:24.340 [2024-11-17 04:17:09.708115] bdev_nvme.c:5567:aer_cb: *WARNING*: AER request execute failed 00:11:24.340 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.340 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.340 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:24.340 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.340 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.340 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.340 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.340 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.340 04:17:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.340 04:17:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.340 04:17:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.340 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:24.340 04:17:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.598 [2024-11-17 04:17:10.204203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:24.598 [2024-11-17 04:17:10.205317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.598 [2024-11-17 04:17:10.205350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.598 [2024-11-17 04:17:10.205360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.598 [2024-11-17 04:17:10.205370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.598 [2024-11-17 04:17:10.205391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.598 [2024-11-17 04:17:10.205399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.598 [2024-11-17 04:17:10.205408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.598 [2024-11-17 04:17:10.205416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.599 [2024-11-17 04:17:10.205422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.599 [2024-11-17 04:17:10.205429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.599 [2024-11-17 04:17:10.205435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.599 [2024-11-17 04:17:10.205443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.599 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 
00:11:24.599 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.599 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.599 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.599 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.599 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.599 04:17:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.599 04:17:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.599 04:17:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.599 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:24.599 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.857 04:17:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.15 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.15 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 00:11:37.056 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.056 04:17:22 sw_hotplug -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:37.056 04:17:22 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:37.056 04:17:22 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.658 04:17:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.658 04:17:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.658 04:17:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:43.658 04:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:43.658 [2024-11-17 04:17:28.683298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:43.658 [2024-11-17 04:17:28.684271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.658 [2024-11-17 04:17:28.684372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.658 [2024-11-17 04:17:28.684397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.658 [2024-11-17 04:17:28.684409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.658 [2024-11-17 04:17:28.684420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.658 [2024-11-17 04:17:28.684426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.658 [2024-11-17 04:17:28.684434] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.658 [2024-11-17 04:17:28.684441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.658 [2024-11-17 04:17:28.684449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.658 [2024-11-17 04:17:28.684456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.658 [2024-11-17 04:17:28.684463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.658 [2024-11-17 04:17:28.684469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.658 [2024-11-17 04:17:29.083309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:43.658 [2024-11-17 04:17:29.084321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.659 [2024-11-17 04:17:29.084349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.659 [2024-11-17 04:17:29.084359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.659 [2024-11-17 04:17:29.084371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.659 [2024-11-17 04:17:29.084386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.659 [2024-11-17 04:17:29.084394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.659 [2024-11-17 04:17:29.084401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.659 [2024-11-17 04:17:29.084411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.659 [2024-11-17 04:17:29.084417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.659 [2024-11-17 04:17:29.084425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.659 [2024-11-17 04:17:29.084431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.659 [2024-11-17 04:17:29.084439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.659 04:17:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.659 04:17:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.659 04:17:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.659 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:43.917 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:43.917 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.917 04:17:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:56.114 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:56.114 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:56.114 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:56.114 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.114 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.115 04:17:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.115 04:17:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.115 04:17:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.115 [2024-11-17 04:17:41.483524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:56.115 [2024-11-17 04:17:41.484516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.115 [2024-11-17 04:17:41.484612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.115 [2024-11-17 04:17:41.484677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.115 [2024-11-17 04:17:41.484729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.115 [2024-11-17 04:17:41.484749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.115 [2024-11-17 04:17:41.484773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.115 [2024-11-17 04:17:41.484828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.115 [2024-11-17 04:17:41.484865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.115 [2024-11-17 04:17:41.484889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.115 [2024-11-17 04:17:41.484912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.115 [2024-11-17 04:17:41.484934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.115 [2024-11-17 04:17:41.484988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.115 04:17:41 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.115 04:17:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.115 04:17:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.115 04:17:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:56.115 04:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:56.373 [2024-11-17 04:17:41.883529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:56.373 [2024-11-17 04:17:41.884459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.373 [2024-11-17 04:17:41.884488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.373 [2024-11-17 04:17:41.884498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.373 [2024-11-17 04:17:41.884508] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.373 [2024-11-17 04:17:41.884515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.373 [2024-11-17 04:17:41.884523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.373 [2024-11-17 04:17:41.884530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.373 [2024-11-17 04:17:41.884538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.373 [2024-11-17 04:17:41.884544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.373 [2024-11-17 04:17:41.884552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.373 [2024-11-17 04:17:41.884558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.373 [2024-11-17 04:17:41.884566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.373 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:56.373 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.373 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.373 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.373 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.373 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:56.373 04:17:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.373 04:17:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.373 04:17:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.373 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:56.373 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.632 04:17:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.831 04:17:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.831 04:17:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.831 04:17:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.831 04:17:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.831 04:17:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.831 [2024-11-17 04:17:54.383868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:08.831 [2024-11-17 04:17:54.384636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.831 [2024-11-17 04:17:54.384659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.831 [2024-11-17 04:17:54.384671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.831 [2024-11-17 04:17:54.384682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.831 [2024-11-17 04:17:54.384691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.831 [2024-11-17 04:17:54.384698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.831 [2024-11-17 04:17:54.384706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.831 [2024-11-17 04:17:54.384713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.831 [2024-11-17 04:17:54.384721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.831 [2024-11-17 04:17:54.384727] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.831 [2024-11-17 04:17:54.384735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.831 [2024-11-17 04:17:54.384742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.831 04:17:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:08.831 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:09.093 [2024-11-17 04:17:54.783875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:09.093 [2024-11-17 04:17:54.784616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.093 [2024-11-17 04:17:54.784646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.093 [2024-11-17 04:17:54.784655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.093 [2024-11-17 04:17:54.784670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.093 [2024-11-17 04:17:54.784677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.093 [2024-11-17 04:17:54.784685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.093 [2024-11-17 04:17:54.784692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.093 [2024-11-17 04:17:54.784700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.093 [2024-11-17 04:17:54.784706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.093 [2024-11-17 04:17:54.784713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.093 [2024-11-17 04:17:54.784720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.093 [2024-11-17 04:17:54.784728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.350 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:09.350 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:09.350 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:09.350 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.350 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.350 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.350 04:17:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.350 04:17:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.350 04:17:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.350 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:09.350 04:17:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:09.350 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.350 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.350 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:09.350 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:09.608 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:09.608 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.608 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.608 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:09.608 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:09.608 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:09.608 04:17:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.60 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.60 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.60 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.60 2 00:12:21.807 remove_attach_helper took 44.60s to complete (handling 2 nvme drive(s)) 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78961 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 78961 ']' 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 78961 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78961 00:12:21.807 killing process with pid 78961 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78961' 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@973 -- # kill 78961 00:12:21.807 04:18:07 sw_hotplug -- common/autotest_common.sh@978 -- # wait 78961 00:12:21.807 04:18:07 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:22.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:22.638 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:22.638 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:22.638 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:22.638 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:22.898 00:12:22.898 real 2m28.390s 00:12:22.898 user 1m48.511s 00:12:22.898 sys 0m18.317s 00:12:22.898 
************************************ 00:12:22.898 END TEST sw_hotplug 00:12:22.898 ************************************ 00:12:22.898 04:18:08 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.898 04:18:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.898 04:18:08 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:22.898 04:18:08 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:22.898 04:18:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.898 04:18:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.898 04:18:08 -- common/autotest_common.sh@10 -- # set +x 00:12:22.898 ************************************ 00:12:22.898 START TEST nvme_xnvme 00:12:22.898 ************************************ 00:12:22.898 04:18:08 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:22.898 * Looking for test storage... 00:12:22.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:22.899 04:18:08 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:22.899 04:18:08 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:22.899 04:18:08 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:23.163 04:18:08 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:23.163 04:18:08 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.163 04:18:08 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:23.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.163 --rc genhtml_branch_coverage=1 00:12:23.163 --rc genhtml_function_coverage=1 00:12:23.163 --rc genhtml_legend=1 00:12:23.163 --rc geninfo_all_blocks=1 00:12:23.163 --rc geninfo_unexecuted_blocks=1 00:12:23.163 00:12:23.163 ' 00:12:23.163 04:18:08 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:23.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.163 --rc genhtml_branch_coverage=1 00:12:23.163 --rc genhtml_function_coverage=1 00:12:23.163 --rc genhtml_legend=1 00:12:23.163 --rc geninfo_all_blocks=1 00:12:23.163 --rc geninfo_unexecuted_blocks=1 00:12:23.163 00:12:23.163 ' 00:12:23.163 04:18:08 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:23.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.163 --rc genhtml_branch_coverage=1 00:12:23.163 --rc genhtml_function_coverage=1 00:12:23.163 --rc genhtml_legend=1 00:12:23.163 --rc geninfo_all_blocks=1 00:12:23.163 --rc geninfo_unexecuted_blocks=1 00:12:23.163 00:12:23.163 ' 00:12:23.163 04:18:08 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:23.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.163 --rc genhtml_branch_coverage=1 00:12:23.163 --rc genhtml_function_coverage=1 00:12:23.163 --rc genhtml_legend=1 00:12:23.163 --rc geninfo_all_blocks=1 00:12:23.163 --rc geninfo_unexecuted_blocks=1 00:12:23.163 00:12:23.163 ' 00:12:23.163 04:18:08 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.163 04:18:08 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.163 04:18:08 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.163 04:18:08 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.163 04:18:08 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.163 04:18:08 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:23.163 04:18:08 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.163 04:18:08 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:23.163 04:18:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.163 04:18:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.163 04:18:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.163 ************************************ 00:12:23.163 START TEST xnvme_to_malloc_dd_copy 00:12:23.163 ************************************ 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:23.163 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:23.163 04:18:08 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:23.164 04:18:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:23.164 { 00:12:23.164 "subsystems": [ 00:12:23.164 { 00:12:23.164 "subsystem": "bdev", 00:12:23.164 "config": [ 00:12:23.164 { 00:12:23.164 "params": { 00:12:23.164 "block_size": 512, 00:12:23.164 "num_blocks": 2097152, 00:12:23.164 "name": "malloc0" 00:12:23.164 }, 00:12:23.164 "method": "bdev_malloc_create" 00:12:23.164 }, 00:12:23.164 { 00:12:23.164 "params": { 00:12:23.164 "io_mechanism": "libaio", 00:12:23.164 "filename": "/dev/nullb0", 00:12:23.164 "name": "null0" 00:12:23.164 }, 00:12:23.164 "method": "bdev_xnvme_create" 00:12:23.164 }, 00:12:23.164 { 00:12:23.164 "method": "bdev_wait_for_examine" 00:12:23.164 } 00:12:23.164 ] 00:12:23.164 } 00:12:23.164 ] 00:12:23.164 } 00:12:23.164 [2024-11-17 04:18:08.761829] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:12:23.164 [2024-11-17 04:18:08.761964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80330 ] 00:12:23.425 [2024-11-17 04:18:08.925498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.425 [2024-11-17 04:18:08.953812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.809  [2024-11-17T04:18:11.475Z] Copying: 223/1024 [MB] (223 MBps) [2024-11-17T04:18:12.409Z] Copying: 448/1024 [MB] (225 MBps) [2024-11-17T04:18:13.344Z] Copying: 755/1024 [MB] (306 MBps) [2024-11-17T04:18:13.602Z] Copying: 1024/1024 [MB] (average 264 MBps) 00:12:27.875 00:12:27.875 04:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:27.875 04:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:27.875 04:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:27.875 04:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:27.875 { 00:12:27.875 "subsystems": [ 00:12:27.875 { 00:12:27.875 "subsystem": "bdev", 00:12:27.875 "config": [ 00:12:27.875 { 00:12:27.875 "params": { 00:12:27.875 "block_size": 512, 00:12:27.875 "num_blocks": 2097152, 00:12:27.875 "name": "malloc0" 00:12:27.875 }, 00:12:27.875 "method": "bdev_malloc_create" 00:12:27.875 }, 00:12:27.875 { 00:12:27.875 "params": { 00:12:27.875 "io_mechanism": "libaio", 00:12:27.875 "filename": "/dev/nullb0", 00:12:27.875 "name": "null0" 00:12:27.875 }, 00:12:27.875 "method": "bdev_xnvme_create" 00:12:27.875 }, 00:12:27.875 { 00:12:27.875 "method": "bdev_wait_for_examine" 00:12:27.875 } 00:12:27.875 ] 00:12:27.875 } 00:12:27.875 ] 00:12:27.875 } 00:12:27.875 [2024-11-17 04:18:13.546182] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
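The spdk_dd passes in this test are driven entirely by the JSON blobs printed before them: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) on one side and an xnvme bdev over /dev/nullb0 on the other (io_mechanism libaio here, io_uring in the later passes). A hedged sketch of reproducing one pass by hand follows; it uses only commands visible in the trace, but $SPDK_DIR and the temporary file name are placeholders (the script itself feeds the config through /dev/fd/62).

# Assumes an SPDK build at $SPDK_DIR and the null_blk module available.
modprobe null_blk gb=1

cat > /tmp/xnvme_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# malloc -> xnvme copy; the reverse pass simply swaps --ib and --ob
"$SPDK_DIR"/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json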
00:12:27.875 [2024-11-17 04:18:13.546307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80385 ] 00:12:28.134 [2024-11-17 04:18:13.701123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.134 [2024-11-17 04:18:13.724123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.510  [2024-11-17T04:18:16.171Z] Copying: 309/1024 [MB] (309 MBps) [2024-11-17T04:18:17.106Z] Copying: 621/1024 [MB] (311 MBps) [2024-11-17T04:18:17.364Z] Copying: 932/1024 [MB] (311 MBps) [2024-11-17T04:18:17.624Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:12:31.897 00:12:31.897 04:18:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:31.897 04:18:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:31.897 04:18:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:31.897 04:18:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:31.897 04:18:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:31.897 04:18:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:32.157 { 00:12:32.157 "subsystems": [ 00:12:32.157 { 00:12:32.157 "subsystem": "bdev", 00:12:32.157 "config": [ 00:12:32.157 { 00:12:32.157 "params": { 00:12:32.157 "block_size": 512, 00:12:32.157 "num_blocks": 2097152, 00:12:32.157 "name": "malloc0" 00:12:32.157 }, 00:12:32.157 "method": "bdev_malloc_create" 00:12:32.157 }, 00:12:32.157 { 00:12:32.157 "params": { 00:12:32.157 "io_mechanism": "io_uring", 00:12:32.157 "filename": "/dev/nullb0", 00:12:32.157 "name": "null0" 00:12:32.157 }, 00:12:32.157 "method": "bdev_xnvme_create" 00:12:32.157 }, 00:12:32.157 { 00:12:32.157 "method": "bdev_wait_for_examine" 00:12:32.157 } 00:12:32.157 ] 00:12:32.157 } 00:12:32.157 ] 00:12:32.157 } 00:12:32.157 [2024-11-17 04:18:17.651294] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:12:32.157 [2024-11-17 04:18:17.651443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80439 ] 00:12:32.157 [2024-11-17 04:18:17.804614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.157 [2024-11-17 04:18:17.828764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.531  [2024-11-17T04:18:20.192Z] Copying: 316/1024 [MB] (316 MBps) [2024-11-17T04:18:21.126Z] Copying: 632/1024 [MB] (316 MBps) [2024-11-17T04:18:21.385Z] Copying: 949/1024 [MB] (317 MBps) [2024-11-17T04:18:21.644Z] Copying: 1024/1024 [MB] (average 316 MBps) 00:12:35.917 00:12:35.917 04:18:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:35.917 04:18:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:35.917 04:18:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:35.917 04:18:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:35.917 { 00:12:35.917 "subsystems": [ 00:12:35.917 { 00:12:35.917 "subsystem": "bdev", 00:12:35.917 "config": [ 00:12:35.917 { 00:12:35.917 "params": { 00:12:35.917 "block_size": 512, 00:12:35.917 "num_blocks": 2097152, 00:12:35.917 "name": "malloc0" 00:12:35.917 }, 00:12:35.917 "method": "bdev_malloc_create" 00:12:35.917 }, 00:12:35.917 { 00:12:35.917 "params": { 00:12:35.917 "io_mechanism": "io_uring", 00:12:35.917 "filename": "/dev/nullb0", 00:12:35.917 "name": "null0" 00:12:35.917 }, 00:12:35.917 "method": "bdev_xnvme_create" 00:12:35.917 }, 00:12:35.917 { 00:12:35.917 "method": "bdev_wait_for_examine" 00:12:35.917 } 00:12:35.917 ] 00:12:35.917 } 00:12:35.917 ] 00:12:35.917 } 00:12:36.177 [2024-11-17 04:18:21.659500] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:12:36.177 [2024-11-17 04:18:21.659739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80493 ] 00:12:36.177 [2024-11-17 04:18:21.817461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.177 [2024-11-17 04:18:21.846353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.564  [2024-11-17T04:18:24.229Z] Copying: 233/1024 [MB] (233 MBps) [2024-11-17T04:18:25.334Z] Copying: 509/1024 [MB] (276 MBps) [2024-11-17T04:18:25.900Z] Copying: 832/1024 [MB] (322 MBps) [2024-11-17T04:18:26.159Z] Copying: 1024/1024 [MB] (average 284 MBps) 00:12:40.432 00:12:40.432 04:18:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:40.432 04:18:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:40.432 00:12:40.432 real 0m17.388s 00:12:40.432 user 0m14.254s 00:12:40.432 sys 0m2.645s 00:12:40.432 ************************************ 00:12:40.432 END TEST xnvme_to_malloc_dd_copy 00:12:40.432 ************************************ 00:12:40.432 04:18:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.432 04:18:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:40.432 04:18:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:40.432 04:18:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:40.432 04:18:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.432 04:18:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.432 ************************************ 00:12:40.432 START TEST xnvme_bdevperf 00:12:40.432 ************************************ 00:12:40.432 04:18:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:40.432 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:40.432 04:18:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:40.432 04:18:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:40.432 04:18:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:40.432 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:40.433 
04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:40.433 04:18:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:40.691 { 00:12:40.691 "subsystems": [ 00:12:40.691 { 00:12:40.691 "subsystem": "bdev", 00:12:40.691 "config": [ 00:12:40.691 { 00:12:40.691 "params": { 00:12:40.691 "io_mechanism": "libaio", 00:12:40.691 "filename": "/dev/nullb0", 00:12:40.691 "name": "null0" 00:12:40.691 }, 00:12:40.691 "method": "bdev_xnvme_create" 00:12:40.691 }, 00:12:40.691 { 00:12:40.691 "method": "bdev_wait_for_examine" 00:12:40.691 } 00:12:40.691 ] 00:12:40.691 } 00:12:40.691 ] 00:12:40.691 } 00:12:40.691 [2024-11-17 04:18:26.191808] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:40.691 [2024-11-17 04:18:26.191930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80576 ] 00:12:40.691 [2024-11-17 04:18:26.348488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.691 [2024-11-17 04:18:26.365389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.950 Running I/O for 5 seconds... 00:12:42.818 208448.00 IOPS, 814.25 MiB/s [2024-11-17T04:18:29.479Z] 208224.00 IOPS, 813.38 MiB/s [2024-11-17T04:18:30.854Z] 208234.67 IOPS, 813.42 MiB/s [2024-11-17T04:18:31.790Z] 208176.00 IOPS, 813.19 MiB/s [2024-11-17T04:18:31.790Z] 208089.60 IOPS, 812.85 MiB/s 00:12:46.063 Latency(us) 00:12:46.063 [2024-11-17T04:18:31.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.063 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:46.063 null0 : 5.00 208035.27 812.64 0.00 0.00 305.49 114.22 1550.18 00:12:46.063 [2024-11-17T04:18:31.790Z] =================================================================================================================== 00:12:46.063 [2024-11-17T04:18:31.790Z] Total : 208035.27 812.64 0.00 0.00 305.49 114.22 1550.18 00:12:46.063 04:18:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:46.063 04:18:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:46.063 04:18:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:46.063 04:18:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:46.063 04:18:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:46.063 04:18:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:46.063 { 00:12:46.063 "subsystems": [ 00:12:46.063 { 00:12:46.063 "subsystem": "bdev", 00:12:46.063 "config": [ 00:12:46.063 { 00:12:46.063 "params": { 00:12:46.063 "io_mechanism": "io_uring", 00:12:46.063 "filename": "/dev/nullb0", 00:12:46.063 "name": "null0" 00:12:46.063 }, 00:12:46.063 "method": "bdev_xnvme_create" 00:12:46.063 }, 
00:12:46.063 { 00:12:46.063 "method": "bdev_wait_for_examine" 00:12:46.063 } 00:12:46.063 ] 00:12:46.063 } 00:12:46.063 ] 00:12:46.063 } 00:12:46.063 [2024-11-17 04:18:31.648007] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:46.063 [2024-11-17 04:18:31.648122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80641 ] 00:12:46.321 [2024-11-17 04:18:31.802859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.321 [2024-11-17 04:18:31.824041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.321 Running I/O for 5 seconds... 00:12:48.190 238400.00 IOPS, 931.25 MiB/s [2024-11-17T04:18:35.291Z] 238304.00 IOPS, 930.88 MiB/s [2024-11-17T04:18:36.224Z] 238058.67 IOPS, 929.92 MiB/s [2024-11-17T04:18:37.158Z] 238080.00 IOPS, 930.00 MiB/s 00:12:51.431 Latency(us) 00:12:51.431 [2024-11-17T04:18:37.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.431 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:51.431 null0 : 5.00 238056.15 929.91 0.00 0.00 266.43 146.51 2356.78 00:12:51.431 [2024-11-17T04:18:37.158Z] =================================================================================================================== 00:12:51.431 [2024-11-17T04:18:37.158Z] Total : 238056.15 929.91 0.00 0.00 266.43 146.51 2356.78 00:12:51.431 04:18:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:51.431 04:18:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:51.431 ************************************ 00:12:51.431 END TEST xnvme_bdevperf 00:12:51.431 ************************************ 00:12:51.431 00:12:51.431 real 0m10.939s 00:12:51.431 user 0m8.614s 00:12:51.431 sys 0m2.101s 00:12:51.431 04:18:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.431 04:18:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:51.431 ************************************ 00:12:51.431 END TEST nvme_xnvme 00:12:51.431 ************************************ 00:12:51.431 00:12:51.431 real 0m28.605s 00:12:51.431 user 0m22.999s 00:12:51.431 sys 0m4.859s 00:12:51.431 04:18:37 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.431 04:18:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.431 04:18:37 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:51.431 04:18:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:51.431 04:18:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.431 04:18:37 -- common/autotest_common.sh@10 -- # set +x 00:12:51.431 ************************************ 00:12:51.431 START TEST blockdev_xnvme 00:12:51.431 ************************************ 00:12:51.431 04:18:37 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:51.692 * Looking for test storage... 
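The xnvme_bdevperf runs above exercise the same null0 xnvme bdev, but through the bdevperf example instead of spdk_dd: queue depth 64, 4 KiB random reads, 5 seconds, first with libaio (about 208K IOPS in this run) and then io_uring (about 238K IOPS). A minimal reproduction sketch along the lines of the traced command follows; $SPDK_DIR and the config file name are placeholders, not taken from the log.

# null_blk must be loaded first, as init_null_blk does in the trace.
modprobe null_blk gb=1

cat > /tmp/xnvme_bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "io_mechanism": "io_uring", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# 64-deep 4 KiB random reads against null0 for 5 seconds, as in the trace.
"$SPDK_DIR"/build/examples/bdevperf --json /tmp/xnvme_bdevperf.json \
    -q 64 -w randread -t 5 -T null0 -o 4096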
00:12:51.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.692 04:18:37 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:51.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.692 --rc genhtml_branch_coverage=1 00:12:51.692 --rc genhtml_function_coverage=1 00:12:51.692 --rc genhtml_legend=1 00:12:51.692 --rc geninfo_all_blocks=1 00:12:51.692 --rc geninfo_unexecuted_blocks=1 00:12:51.692 00:12:51.692 ' 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:51.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.692 --rc genhtml_branch_coverage=1 00:12:51.692 --rc genhtml_function_coverage=1 00:12:51.692 --rc genhtml_legend=1 
00:12:51.692 --rc geninfo_all_blocks=1 00:12:51.692 --rc geninfo_unexecuted_blocks=1 00:12:51.692 00:12:51.692 ' 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:51.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.692 --rc genhtml_branch_coverage=1 00:12:51.692 --rc genhtml_function_coverage=1 00:12:51.692 --rc genhtml_legend=1 00:12:51.692 --rc geninfo_all_blocks=1 00:12:51.692 --rc geninfo_unexecuted_blocks=1 00:12:51.692 00:12:51.692 ' 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:51.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.692 --rc genhtml_branch_coverage=1 00:12:51.692 --rc genhtml_function_coverage=1 00:12:51.692 --rc genhtml_legend=1 00:12:51.692 --rc geninfo_all_blocks=1 00:12:51.692 --rc geninfo_unexecuted_blocks=1 00:12:51.692 00:12:51.692 ' 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80779 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80779 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 80779 ']' 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.692 04:18:37 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:51.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.692 04:18:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.692 [2024-11-17 04:18:37.373678] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:51.692 [2024-11-17 04:18:37.373918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80779 ] 00:12:51.951 [2024-11-17 04:18:37.527219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.951 [2024-11-17 04:18:37.543879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.518 04:18:38 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.518 04:18:38 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:12:52.518 04:18:38 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:52.518 04:18:38 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:52.518 04:18:38 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:52.518 04:18:38 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:52.518 04:18:38 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:52.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:53.034 Waiting for block devices as requested 00:12:53.034 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.034 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.034 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.291 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.594 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned 
nvme1n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.594 04:18:43 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:58.594 nvme0n1 00:12:58.594 nvme1n1 00:12:58.594 nvme2n1 00:12:58.594 nvme2n2 00:12:58.594 nvme2n3 00:12:58.594 nvme3n1 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.594 04:18:43 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.594 04:18:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:58.594 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:58.595 04:18:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3885ec6a-12f4-41b6-9552-ea5fd9145b11"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3885ec6a-12f4-41b6-9552-ea5fd9145b11",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0f893704-5861-4eba-8f85-ae248e848561"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0f893704-5861-4eba-8f85-ae248e848561",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "42711706-46d6-43d3-ab98-e0f99d262281"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "42711706-46d6-43d3-ab98-e0f99d262281",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2bd9ec86-2592-4880-9b40-fef1c67ccb68"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2bd9ec86-2592-4880-9b40-fef1c67ccb68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "654d2a4a-2428-4eee-9c60-34079218ca05"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "654d2a4a-2428-4eee-9c60-34079218ca05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cfdf769d-448e-4541-9bfc-ba68b0b706d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cfdf769d-448e-4541-9bfc-ba68b0b706d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:58.595 04:18:44 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:58.595 04:18:44 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:58.595 04:18:44 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:58.595 04:18:44 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80779 00:12:58.595 04:18:44 
blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 80779 ']' 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 80779 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80779 00:12:58.595 killing process with pid 80779 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80779' 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 80779 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 80779 00:12:58.595 04:18:44 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:58.595 04:18:44 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.595 04:18:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.595 ************************************ 00:12:58.595 START TEST bdev_hello_world 00:12:58.595 ************************************ 00:12:58.595 04:18:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:58.854 [2024-11-17 04:18:44.334188] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:58.854 [2024-11-17 04:18:44.334458] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81125 ] 00:12:58.854 [2024-11-17 04:18:44.489348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.854 [2024-11-17 04:18:44.509033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.114 [2024-11-17 04:18:44.666555] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:59.114 [2024-11-17 04:18:44.666593] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:59.114 [2024-11-17 04:18:44.666610] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:59.114 [2024-11-17 04:18:44.668127] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:59.114 [2024-11-17 04:18:44.668430] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:59.114 [2024-11-17 04:18:44.668451] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:59.114 [2024-11-17 04:18:44.668857] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
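Before the hello_bdev and bdevio stages, the blockdev_xnvme setup traced earlier enumerated every non-zoned /dev/nvme*n* node and registered it as an io_uring xnvme bdev over RPC; the six resulting bdevs (nvme0n1 through nvme3n1) are what the JSON dump above describes. A condensed sketch of that registration loop follows; the real script first collects the bdev_xnvme_create commands into an array before replaying them through rpc_cmd, and its zoned check is slightly more elaborate.

# Register one xnvme bdev per NVMe namespace node, skipping zoned devices,
# mirroring the bdev_xnvme_create calls shown in the trace.
io_mechanism=io_uring
for nvme in /dev/nvme*n*; do
    name=${nvme##*/}
    zoned_file=/sys/block/$name/queue/zoned
    if [[ -e $zoned_file && $(<"$zoned_file") != none ]]; then
        continue   # skip zoned namespaces
    fi
    rpc_cmd bdev_xnvme_create "$nvme" "$name" "$io_mechanism"
done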
00:12:59.114 00:12:59.114 [2024-11-17 04:18:44.668903] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:59.114 00:12:59.114 real 0m0.509s 00:12:59.114 user 0m0.258s 00:12:59.114 sys 0m0.143s 00:12:59.114 04:18:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.114 ************************************ 00:12:59.114 END TEST bdev_hello_world 00:12:59.114 ************************************ 00:12:59.114 04:18:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:59.114 04:18:44 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:59.114 04:18:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.114 04:18:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.114 04:18:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.373 ************************************ 00:12:59.373 START TEST bdev_bounds 00:12:59.373 ************************************ 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:59.373 Process bdevio pid: 81149 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81149 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81149' 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81149 00:12:59.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 81149 ']' 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:59.373 04:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:59.373 [2024-11-17 04:18:44.900943] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
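The bdev_bounds stage that starts here runs the bdevio application in wait-for-RPC mode against the generated bdev.json and then drives the per-bdev test matrix from tests.py. A rough sketch of that invocation follows, using the paths and flags visible in the trace; $SPDK_DIR is a placeholder, and the sleep is a crude stand-in for the script's waitforlisten helper.

# Start bdevio (-w waits for RPC) against the xnvme bdev config, then run the tests.
"$SPDK_DIR"/test/bdev/bdevio/bdevio -w -s 0 --json "$SPDK_DIR"/test/bdev/bdev.json '' &
bdevio_pid=$!
sleep 1   # the real script polls the RPC socket (waitforlisten) instead
"$SPDK_DIR"/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"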
00:12:59.373 [2024-11-17 04:18:44.901067] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81149 ] 00:12:59.373 [2024-11-17 04:18:45.054710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:59.373 [2024-11-17 04:18:45.080004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.373 [2024-11-17 04:18:45.080545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.373 [2024-11-17 04:18:45.080618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.308 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.308 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:00.308 04:18:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:00.308 I/O targets: 00:13:00.308 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:00.308 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:00.308 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:00.308 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:00.308 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:00.308 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:00.308 00:13:00.308 00:13:00.308 CUnit - A unit testing framework for C - Version 2.1-3 00:13:00.308 http://cunit.sourceforge.net/ 00:13:00.308 00:13:00.308 00:13:00.308 Suite: bdevio tests on: nvme3n1 00:13:00.308 Test: blockdev write read block ...passed 00:13:00.308 Test: blockdev write zeroes read block ...passed 00:13:00.308 Test: blockdev write zeroes read no split ...passed 00:13:00.308 Test: blockdev write zeroes read split ...passed 00:13:00.308 Test: blockdev write zeroes read split partial ...passed 00:13:00.308 Test: blockdev reset ...passed 00:13:00.308 Test: blockdev write read 8 blocks ...passed 00:13:00.308 Test: blockdev write read size > 128k ...passed 00:13:00.308 Test: blockdev write read invalid size ...passed 00:13:00.308 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.308 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.308 Test: blockdev write read max offset ...passed 00:13:00.308 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.308 Test: blockdev writev readv 8 blocks ...passed 00:13:00.308 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.308 Test: blockdev writev readv block ...passed 00:13:00.308 Test: blockdev writev readv size > 128k ...passed 00:13:00.308 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.308 Test: blockdev comparev and writev ...passed 00:13:00.308 Test: blockdev nvme passthru rw ...passed 00:13:00.309 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.309 Test: blockdev nvme admin passthru ...passed 00:13:00.309 Test: blockdev copy ...passed 00:13:00.309 Suite: bdevio tests on: nvme2n3 00:13:00.309 Test: blockdev write read block ...passed 00:13:00.309 Test: blockdev write zeroes read block ...passed 00:13:00.309 Test: blockdev write zeroes read no split ...passed 00:13:00.309 Test: blockdev write zeroes read split ...passed 00:13:00.309 Test: blockdev write zeroes read split partial ...passed 00:13:00.309 Test: blockdev reset ...passed 
00:13:00.309 Test: blockdev write read 8 blocks ...passed 00:13:00.309 Test: blockdev write read size > 128k ...passed 00:13:00.309 Test: blockdev write read invalid size ...passed 00:13:00.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.309 Test: blockdev write read max offset ...passed 00:13:00.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.309 Test: blockdev writev readv 8 blocks ...passed 00:13:00.309 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.309 Test: blockdev writev readv block ...passed 00:13:00.309 Test: blockdev writev readv size > 128k ...passed 00:13:00.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.309 Test: blockdev comparev and writev ...passed 00:13:00.309 Test: blockdev nvme passthru rw ...passed 00:13:00.309 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.309 Test: blockdev nvme admin passthru ...passed 00:13:00.309 Test: blockdev copy ...passed 00:13:00.309 Suite: bdevio tests on: nvme2n2 00:13:00.309 Test: blockdev write read block ...passed 00:13:00.309 Test: blockdev write zeroes read block ...passed 00:13:00.309 Test: blockdev write zeroes read no split ...passed 00:13:00.309 Test: blockdev write zeroes read split ...passed 00:13:00.309 Test: blockdev write zeroes read split partial ...passed 00:13:00.309 Test: blockdev reset ...passed 00:13:00.309 Test: blockdev write read 8 blocks ...passed 00:13:00.309 Test: blockdev write read size > 128k ...passed 00:13:00.309 Test: blockdev write read invalid size ...passed 00:13:00.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.309 Test: blockdev write read max offset ...passed 00:13:00.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.309 Test: blockdev writev readv 8 blocks ...passed 00:13:00.309 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.309 Test: blockdev writev readv block ...passed 00:13:00.309 Test: blockdev writev readv size > 128k ...passed 00:13:00.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.309 Test: blockdev comparev and writev ...passed 00:13:00.309 Test: blockdev nvme passthru rw ...passed 00:13:00.309 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.309 Test: blockdev nvme admin passthru ...passed 00:13:00.309 Test: blockdev copy ...passed 00:13:00.309 Suite: bdevio tests on: nvme2n1 00:13:00.309 Test: blockdev write read block ...passed 00:13:00.309 Test: blockdev write zeroes read block ...passed 00:13:00.309 Test: blockdev write zeroes read no split ...passed 00:13:00.309 Test: blockdev write zeroes read split ...passed 00:13:00.309 Test: blockdev write zeroes read split partial ...passed 00:13:00.309 Test: blockdev reset ...passed 00:13:00.309 Test: blockdev write read 8 blocks ...passed 00:13:00.309 Test: blockdev write read size > 128k ...passed 00:13:00.309 Test: blockdev write read invalid size ...passed 00:13:00.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.309 Test: blockdev write read max offset ...passed 00:13:00.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.309 Test: blockdev writev readv 8 blocks 
...passed 00:13:00.309 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.309 Test: blockdev writev readv block ...passed 00:13:00.309 Test: blockdev writev readv size > 128k ...passed 00:13:00.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.309 Test: blockdev comparev and writev ...passed 00:13:00.309 Test: blockdev nvme passthru rw ...passed 00:13:00.309 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.309 Test: blockdev nvme admin passthru ...passed 00:13:00.309 Test: blockdev copy ...passed 00:13:00.309 Suite: bdevio tests on: nvme1n1 00:13:00.309 Test: blockdev write read block ...passed 00:13:00.309 Test: blockdev write zeroes read block ...passed 00:13:00.309 Test: blockdev write zeroes read no split ...passed 00:13:00.309 Test: blockdev write zeroes read split ...passed 00:13:00.309 Test: blockdev write zeroes read split partial ...passed 00:13:00.309 Test: blockdev reset ...passed 00:13:00.309 Test: blockdev write read 8 blocks ...passed 00:13:00.309 Test: blockdev write read size > 128k ...passed 00:13:00.309 Test: blockdev write read invalid size ...passed 00:13:00.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.309 Test: blockdev write read max offset ...passed 00:13:00.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.309 Test: blockdev writev readv 8 blocks ...passed 00:13:00.309 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.309 Test: blockdev writev readv block ...passed 00:13:00.309 Test: blockdev writev readv size > 128k ...passed 00:13:00.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.309 Test: blockdev comparev and writev ...passed 00:13:00.309 Test: blockdev nvme passthru rw ...passed 00:13:00.309 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.309 Test: blockdev nvme admin passthru ...passed 00:13:00.309 Test: blockdev copy ...passed 00:13:00.309 Suite: bdevio tests on: nvme0n1 00:13:00.309 Test: blockdev write read block ...passed 00:13:00.309 Test: blockdev write zeroes read block ...passed 00:13:00.309 Test: blockdev write zeroes read no split ...passed 00:13:00.309 Test: blockdev write zeroes read split ...passed 00:13:00.309 Test: blockdev write zeroes read split partial ...passed 00:13:00.309 Test: blockdev reset ...passed 00:13:00.309 Test: blockdev write read 8 blocks ...passed 00:13:00.309 Test: blockdev write read size > 128k ...passed 00:13:00.309 Test: blockdev write read invalid size ...passed 00:13:00.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.309 Test: blockdev write read max offset ...passed 00:13:00.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.309 Test: blockdev writev readv 8 blocks ...passed 00:13:00.309 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.309 Test: blockdev writev readv block ...passed 00:13:00.309 Test: blockdev writev readv size > 128k ...passed 00:13:00.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.309 Test: blockdev comparev and writev ...passed 00:13:00.309 Test: blockdev nvme passthru rw ...passed 00:13:00.309 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.309 Test: blockdev nvme admin passthru ...passed 00:13:00.309 Test: blockdev copy ...passed 
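The six bdevio suites above come from the bdev_bounds case, which starts the bdevio application in wait mode and then triggers the CUnit suites over RPC; the run summary follows below. A rough sketch of that two-step pattern, under the same paths and socket this job uses, might be:

#!/usr/bin/env bash
# Sketch of the bdev_bounds flow: bdevio idles in wait mode (-w) on the
# default RPC socket until tests.py asks it to run the registered suites.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 \
    --json "$SPDK_DIR/test/bdev/bdev.json" &
bdevio_pid=$!
# The harness proper uses waitforlisten to poll the RPC socket; a plain
# existence check on /var/tmp/spdk.sock is a simplification of that step.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
"$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid"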
00:13:00.309 00:13:00.309 Run Summary: Type Total Ran Passed Failed Inactive 00:13:00.309 suites 6 6 n/a 0 0 00:13:00.309 tests 138 138 138 0 0 00:13:00.309 asserts 780 780 780 0 n/a 00:13:00.309 00:13:00.309 Elapsed time = 0.228 seconds 00:13:00.309 0 00:13:00.309 04:18:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81149 00:13:00.309 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 81149 ']' 00:13:00.309 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 81149 00:13:00.309 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:00.309 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.309 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81149 00:13:00.310 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.310 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.310 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81149' 00:13:00.310 killing process with pid 81149 00:13:00.310 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 81149 00:13:00.310 04:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 81149 00:13:00.569 04:18:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:00.569 00:13:00.569 real 0m1.252s 00:13:00.569 user 0m3.230s 00:13:00.569 sys 0m0.242s 00:13:00.569 04:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.569 04:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:00.569 ************************************ 00:13:00.569 END TEST bdev_bounds 00:13:00.569 ************************************ 00:13:00.569 04:18:46 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:00.569 04:18:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:00.569 04:18:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.569 04:18:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.569 ************************************ 00:13:00.569 START TEST bdev_nbd 00:13:00.569 ************************************ 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
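The bdev_nbd case that starts here attaches each of the six bdevs to an NBD device through a dedicated RPC server listening on /var/tmp/spdk-nbd.sock, exercises the block devices, and tears the mappings down again, as the trace below shows. Reduced to its RPC calls, the mapping pattern is roughly:

#!/usr/bin/env bash
# Sketch of the nbd mapping calls driven by nbd_function_test; the bdev
# names and socket path are the ones used in this job, and the kernel
# nbd module must already be loaded (/sys/module/nbd is checked above).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# Expose a bdev as an NBD device; with no device argument the RPC picks
# a free /dev/nbdN itself and prints the chosen path.
$RPC nbd_start_disk nvme0n1 /dev/nbd0
# List current bdev <-> nbd mappings as JSON.
$RPC nbd_get_disks
# Detach the device again when done.
$RPC nbd_stop_disk /dev/nbd0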
00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81193 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81193 /var/tmp/spdk-nbd.sock 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 81193 ']' 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:00.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.569 04:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:00.569 [2024-11-17 04:18:46.224079] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
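Once bdev_svc is up on the nbd socket, each nbd_start_disk call in the trace below is followed by a waitfornbd-style check: the helper polls /proc/partitions for the new device and then reads a single 4096-byte block from it with dd to confirm it is usable. A condensed sketch of that check, with the loop bound, scratch file and dd flags taken from the trace, might be:

#!/usr/bin/env bash
# Sketch of the waitfornbd readback check used after each nbd_start_disk.
nbd_name=nbd0   # device to wait for (nbd0, nbd1, ... as mapped above)
scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
for i in $(seq 1 20); do
    # Wait until the kernel has registered the partition entry.
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1   # polling interval not shown in the trace; assumed here
done
# Read one block with O_DIRECT; a non-empty copy means the device works.
dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct
[ "$(stat -c %s "$scratch")" != 0 ]
rm -f "$scratch"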
00:13:00.569 [2024-11-17 04:18:46.224310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.828 [2024-11-17 04:18:46.378032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.828 [2024-11-17 04:18:46.394964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.394 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.653 
1+0 records in 00:13:01.653 1+0 records out 00:13:01.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352416 s, 11.6 MB/s 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.653 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.911 1+0 records in 00:13:01.911 1+0 records out 00:13:01.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585161 s, 7.0 MB/s 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.911 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:02.169 04:18:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.169 1+0 records in 00:13:02.169 1+0 records out 00:13:02.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318316 s, 12.9 MB/s 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.169 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.170 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.170 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.170 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.170 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.170 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.170 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.429 1+0 records in 00:13:02.429 1+0 records out 00:13:02.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276181 s, 14.8 MB/s 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.429 04:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:02.429 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:02.429 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:02.429 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:02.429 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.687 1+0 records in 00:13:02.687 1+0 records out 00:13:02.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420043 s, 9.8 MB/s 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:02.687 04:18:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.687 1+0 records in 00:13:02.687 1+0 records out 00:13:02.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278835 s, 14.7 MB/s 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.687 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd0", 00:13:02.946 "bdev_name": "nvme0n1" 00:13:02.946 }, 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd1", 00:13:02.946 "bdev_name": "nvme1n1" 00:13:02.946 }, 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd2", 00:13:02.946 "bdev_name": "nvme2n1" 00:13:02.946 }, 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd3", 00:13:02.946 "bdev_name": "nvme2n2" 00:13:02.946 }, 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd4", 00:13:02.946 "bdev_name": "nvme2n3" 00:13:02.946 }, 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd5", 00:13:02.946 "bdev_name": "nvme3n1" 00:13:02.946 } 00:13:02.946 ]' 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd0", 00:13:02.946 "bdev_name": "nvme0n1" 00:13:02.946 }, 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd1", 00:13:02.946 "bdev_name": "nvme1n1" 00:13:02.946 }, 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd2", 00:13:02.946 "bdev_name": "nvme2n1" 00:13:02.946 }, 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd3", 00:13:02.946 "bdev_name": "nvme2n2" 00:13:02.946 }, 00:13:02.946 { 00:13:02.946 "nbd_device": "/dev/nbd4", 00:13:02.946 "bdev_name": "nvme2n3" 00:13:02.946 }, 00:13:02.946 { 00:13:02.946 "nbd_device": 
"/dev/nbd5", 00:13:02.946 "bdev_name": "nvme3n1" 00:13:02.946 } 00:13:02.946 ]' 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.946 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:03.205 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.205 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.205 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.205 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.205 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.205 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.205 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.205 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.205 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.205 04:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:03.463 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:03.463 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.463 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.463 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.463 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.463 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.463 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.463 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.463 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.463 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:03.721 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:03.721 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:03.721 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:03.721 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.721 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.721 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:03.721 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.721 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.721 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.721 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.980 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.238 04:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.496 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:04.757 /dev/nbd0 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.757 1+0 records in 00:13:04.757 1+0 records out 00:13:04.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101892 s, 4.0 MB/s 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.757 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:05.019 /dev/nbd1 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.019 1+0 records in 00:13:05.019 1+0 records out 00:13:05.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113348 s, 3.6 MB/s 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.019 04:18:50 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.019 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:05.280 /dev/nbd10 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.280 1+0 records in 00:13:05.280 1+0 records out 00:13:05.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127888 s, 3.2 MB/s 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.280 04:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:05.541 /dev/nbd11 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.541 04:18:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.541 1+0 records in 00:13:05.541 1+0 records out 00:13:05.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111646 s, 3.7 MB/s 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.541 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:05.803 /dev/nbd12 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.803 1+0 records in 00:13:05.803 1+0 records out 00:13:05.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134732 s, 3.0 MB/s 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.803 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:06.065 /dev/nbd13 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.065 1+0 records in 00:13:06.065 1+0 records out 00:13:06.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668268 s, 6.1 MB/s 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.065 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd0", 00:13:06.327 "bdev_name": "nvme0n1" 00:13:06.327 }, 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd1", 00:13:06.327 "bdev_name": "nvme1n1" 00:13:06.327 }, 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd10", 00:13:06.327 "bdev_name": "nvme2n1" 00:13:06.327 }, 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd11", 00:13:06.327 "bdev_name": "nvme2n2" 00:13:06.327 }, 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd12", 00:13:06.327 "bdev_name": "nvme2n3" 00:13:06.327 }, 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd13", 00:13:06.327 "bdev_name": "nvme3n1" 00:13:06.327 } 00:13:06.327 ]' 00:13:06.327 04:18:51 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd0", 00:13:06.327 "bdev_name": "nvme0n1" 00:13:06.327 }, 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd1", 00:13:06.327 "bdev_name": "nvme1n1" 00:13:06.327 }, 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd10", 00:13:06.327 "bdev_name": "nvme2n1" 00:13:06.327 }, 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd11", 00:13:06.327 "bdev_name": "nvme2n2" 00:13:06.327 }, 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd12", 00:13:06.327 "bdev_name": "nvme2n3" 00:13:06.327 }, 00:13:06.327 { 00:13:06.327 "nbd_device": "/dev/nbd13", 00:13:06.327 "bdev_name": "nvme3n1" 00:13:06.327 } 00:13:06.327 ]' 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:06.327 /dev/nbd1 00:13:06.327 /dev/nbd10 00:13:06.327 /dev/nbd11 00:13:06.327 /dev/nbd12 00:13:06.327 /dev/nbd13' 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:06.327 /dev/nbd1 00:13:06.327 /dev/nbd10 00:13:06.327 /dev/nbd11 00:13:06.327 /dev/nbd12 00:13:06.327 /dev/nbd13' 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:06.327 256+0 records in 00:13:06.327 256+0 records out 00:13:06.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439016 s, 239 MB/s 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.327 04:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:06.589 256+0 records in 00:13:06.589 256+0 records out 00:13:06.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170835 s, 6.1 MB/s 00:13:06.589 04:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.589 04:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:06.851 256+0 records in 00:13:06.852 256+0 records out 00:13:06.852 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.212274 s, 4.9 MB/s 00:13:06.852 04:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.852 04:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:06.852 256+0 records in 00:13:06.852 256+0 records out 00:13:06.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173545 s, 6.0 MB/s 00:13:06.852 04:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.852 04:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:07.113 256+0 records in 00:13:07.113 256+0 records out 00:13:07.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171652 s, 6.1 MB/s 00:13:07.113 04:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.113 04:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:07.373 256+0 records in 00:13:07.373 256+0 records out 00:13:07.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173656 s, 6.0 MB/s 00:13:07.373 04:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.373 04:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:07.373 256+0 records in 00:13:07.373 256+0 records out 00:13:07.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145848 s, 7.2 MB/s 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.373 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:07.631 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.631 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.631 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.631 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.631 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.631 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:07.631 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.631 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.631 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.631 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:07.889 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:07.889 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:07.889 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:07.889 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.889 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.889 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.889 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.889 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.889 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.889 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:08.147 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.148 04:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:08.406 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:08.406 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:08.406 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:08.406 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.406 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.406 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:08.406 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.406 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.406 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.406 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:08.664 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:08.664 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:08.664 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:08.664 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.664 04:18:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.664 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:08.664 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.664 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.664 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:08.664 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.664 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:08.922 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:09.180 malloc_lvol_verify 00:13:09.180 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:09.181 6aa45edc-4f68-40d2-8d80-10d8cc253d4b 00:13:09.181 04:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:09.438 8c606d85-8811-4b79-bf01-d9aa4b117405 00:13:09.438 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:09.697 /dev/nbd0 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
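The mke2fs output that follows belongs to the lvol-verify step traced above: the test created a malloc bdev, built an lvstore and a 4 MiB lvol on top of it, exported the lvol as /dev/nbd0 and is now formatting it. Reconstructed from the RPC calls in the trace, the sequence is roughly the sketch below; the RPC shell variable is shorthand introduced here, everything else is taken from the log.

# Malloc bdev -> lvstore -> lvol -> NBD export -> ext4, as exercised above
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512-byte blocks
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore 'lvs' on the malloc bdev
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol named 'lvol' in 'lvs'
$RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as a kernel NBD device
mkfs.ext4 /dev/nbd0                                    # its mke2fs output follows in the log
$RPC nbd_stop_disk /dev/nbd0                           # torn down again further below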
00:13:09.697 mke2fs 1.47.0 (5-Feb-2023) 00:13:09.697 Discarding device blocks: 0/4096 done 00:13:09.697 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:09.697 00:13:09.697 Allocating group tables: 0/1 done 00:13:09.697 Writing inode tables: 0/1 done 00:13:09.697 Creating journal (1024 blocks): done 00:13:09.697 Writing superblocks and filesystem accounting information: 0/1 done 00:13:09.697 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.697 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81193 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 81193 ']' 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 81193 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81193 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.958 killing process with pid 81193 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81193' 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 81193 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 81193 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:09.958 00:13:09.958 real 0m9.510s 00:13:09.958 user 0m13.373s 00:13:09.958 sys 0m3.337s 00:13:09.958 ************************************ 00:13:09.958 END TEST bdev_nbd 00:13:09.958 ************************************ 00:13:09.958 04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.958 
04:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.219 04:18:55 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:10.219 04:18:55 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:10.219 04:18:55 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:10.219 04:18:55 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:10.219 04:18:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.219 04:18:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.219 04:18:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.219 ************************************ 00:13:10.219 START TEST bdev_fio 00:13:10.219 ************************************ 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:10.219 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:10.219 ************************************ 00:13:10.219 START TEST bdev_fio_rw_verify 00:13:10.219 ************************************ 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:10.219 04:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.478 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.478 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.478 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.479 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.479 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.479 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.479 fio-3.35 00:13:10.479 Starting 6 threads 00:13:22.702 00:13:22.702 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81585: Sun Nov 17 04:19:06 2024 00:13:22.702 read: IOPS=32.6k, BW=127MiB/s (133MB/s)(1272MiB/10001msec) 00:13:22.702 slat (usec): min=2, max=1964, avg= 4.67, stdev= 8.77 00:13:22.702 clat (usec): min=80, max=7716, avg=562.95, 
stdev=402.42 00:13:22.702 lat (usec): min=86, max=7721, avg=567.62, stdev=402.98 00:13:22.702 clat percentiles (usec): 00:13:22.702 | 50.000th=[ 478], 99.000th=[ 2212], 99.900th=[ 3752], 99.990th=[ 5014], 00:13:22.702 | 99.999th=[ 5800] 00:13:22.702 write: IOPS=32.9k, BW=128MiB/s (135MB/s)(1285MiB/10001msec); 0 zone resets 00:13:22.702 slat (usec): min=3, max=3202, avg=22.92, stdev=61.05 00:13:22.702 clat (usec): min=73, max=6954, avg=686.41, stdev=488.40 00:13:22.702 lat (usec): min=86, max=6980, avg=709.32, stdev=497.14 00:13:22.702 clat percentiles (usec): 00:13:22.702 | 50.000th=[ 570], 99.000th=[ 2769], 99.900th=[ 4228], 99.990th=[ 5604], 00:13:22.702 | 99.999th=[ 6915] 00:13:22.702 bw ( KiB/s): min=57171, max=194712, per=99.53%, avg=130931.42, stdev=6669.00, samples=114 00:13:22.702 iops : min=14292, max=48678, avg=32732.26, stdev=1667.29, samples=114 00:13:22.702 lat (usec) : 100=0.07%, 250=10.64%, 500=35.72%, 750=29.25%, 1000=13.06% 00:13:22.702 lat (msec) : 2=9.06%, 4=2.10%, 10=0.10% 00:13:22.702 cpu : usr=50.10%, sys=31.02%, ctx=8908, majf=0, minf=27189 00:13:22.702 IO depths : 1=12.1%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.702 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.702 issued rwts: total=325747,328905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.702 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:22.702 00:13:22.702 Run status group 0 (all jobs): 00:13:22.702 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=1272MiB (1334MB), run=10001-10001msec 00:13:22.702 WRITE: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=1285MiB (1347MB), run=10001-10001msec 00:13:22.702 ----------------------------------------------------- 00:13:22.702 Suppressions used: 00:13:22.702 count bytes template 00:13:22.702 6 48 /usr/src/fio/parse.c 00:13:22.702 2917 280032 /usr/src/fio/iolog.c 00:13:22.702 1 8 libtcmalloc_minimal.so 00:13:22.702 1 904 libcrypto.so 00:13:22.702 ----------------------------------------------------- 00:13:22.702 00:13:22.702 00:13:22.702 real 0m11.099s 00:13:22.702 user 0m30.742s 00:13:22.702 sys 0m18.905s 00:13:22.702 ************************************ 00:13:22.702 END TEST bdev_fio_rw_verify 00:13:22.702 ************************************ 00:13:22.702 04:19:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.702 04:19:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:22.702 04:19:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:22.702 04:19:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.702 04:19:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3885ec6a-12f4-41b6-9552-ea5fd9145b11"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3885ec6a-12f4-41b6-9552-ea5fd9145b11",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0f893704-5861-4eba-8f85-ae248e848561"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0f893704-5861-4eba-8f85-ae248e848561",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "42711706-46d6-43d3-ab98-e0f99d262281"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "42711706-46d6-43d3-ab98-e0f99d262281",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2bd9ec86-2592-4880-9b40-fef1c67ccb68"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2bd9ec86-2592-4880-9b40-fef1c67ccb68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "654d2a4a-2428-4eee-9c60-34079218ca05"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "654d2a4a-2428-4eee-9c60-34079218ca05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cfdf769d-448e-4541-9bfc-ba68b0b706d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cfdf769d-448e-4541-9bfc-ba68b0b706d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.703 /home/vagrant/spdk_repo/spdk 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:13:22.703 00:13:22.703 real 0m11.261s 00:13:22.703 user 0m30.808s 00:13:22.703 sys 0m18.984s 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.703 ************************************ 00:13:22.703 END TEST bdev_fio 00:13:22.703 ************************************ 00:13:22.703 04:19:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:22.703 04:19:07 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:22.703 04:19:07 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:22.703 04:19:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:22.703 04:19:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.703 04:19:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.703 ************************************ 00:13:22.703 START TEST bdev_verify 00:13:22.703 ************************************ 00:13:22.703 04:19:07 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:22.703 [2024-11-17 04:19:07.086614] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:22.703 [2024-11-17 04:19:07.086724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81754 ] 00:13:22.703 [2024-11-17 04:19:07.239704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:22.703 [2024-11-17 04:19:07.269992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.703 [2024-11-17 04:19:07.270058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.703 Running I/O for 5 seconds... 
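The per-second throughput samples and the latency table below come from the bdevperf run started just above; the later big-I/O and write_zeroes passes in this log reuse the same pattern and vary mainly the I/O size (-o), workload (-w) and run time (-t). From the command line in the trace, the invocation is roughly:

# bdevperf verify run: queue depth 128, 4 KiB I/Os, 5 s, reactors on cores 0-1,
# bdevs taken from the generated JSON config
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3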
00:13:24.347 24800.00 IOPS, 96.88 MiB/s [2024-11-17T04:19:11.010Z] 24112.00 IOPS, 94.19 MiB/s [2024-11-17T04:19:11.944Z] 23733.33 IOPS, 92.71 MiB/s [2024-11-17T04:19:12.880Z] 23632.25 IOPS, 92.31 MiB/s [2024-11-17T04:19:12.880Z] 23468.80 IOPS, 91.68 MiB/s 00:13:27.153 Latency(us) 00:13:27.153 [2024-11-17T04:19:12.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.153 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0x0 length 0xa0000 00:13:27.153 nvme0n1 : 5.01 1685.47 6.58 0.00 0.00 75794.67 8771.74 69770.63 00:13:27.153 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0xa0000 length 0xa0000 00:13:27.153 nvme0n1 : 5.05 1698.20 6.63 0.00 0.00 75234.43 10334.52 74610.22 00:13:27.153 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0x0 length 0xbd0bd 00:13:27.153 nvme1n1 : 5.04 2861.82 11.18 0.00 0.00 44449.41 4461.49 59688.17 00:13:27.153 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:27.153 nvme1n1 : 5.05 2863.49 11.19 0.00 0.00 44429.41 5494.94 56865.08 00:13:27.153 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0x0 length 0x80000 00:13:27.153 nvme2n1 : 5.06 1721.53 6.72 0.00 0.00 73767.32 11191.53 62914.56 00:13:27.153 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0x80000 length 0x80000 00:13:27.153 nvme2n1 : 5.05 1825.59 7.13 0.00 0.00 69587.80 11040.30 62914.56 00:13:27.153 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0x0 length 0x80000 00:13:27.153 nvme2n2 : 5.07 1717.96 6.71 0.00 0.00 73725.98 8922.98 60898.07 00:13:27.153 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0x80000 length 0x80000 00:13:27.153 nvme2n2 : 5.05 1824.02 7.13 0.00 0.00 69506.04 8570.09 64527.75 00:13:27.153 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0x0 length 0x80000 00:13:27.153 nvme2n3 : 5.07 1717.48 6.71 0.00 0.00 73579.94 7057.72 60091.47 00:13:27.153 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0x80000 length 0x80000 00:13:27.153 nvme2n3 : 5.06 1821.39 7.11 0.00 0.00 69474.68 5116.85 65737.65 00:13:27.153 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0x0 length 0x20000 00:13:27.153 nvme3n1 : 5.07 1740.35 6.80 0.00 0.00 72499.73 3428.04 64124.46 00:13:27.153 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:27.153 Verification LBA range: start 0x20000 length 0x20000 00:13:27.153 nvme3n1 : 5.06 1820.87 7.11 0.00 0.00 69376.38 5696.59 63721.16 00:13:27.153 [2024-11-17T04:19:12.880Z] =================================================================================================================== 00:13:27.153 [2024-11-17T04:19:12.880Z] Total : 23298.17 91.01 0.00 0.00 65368.68 3428.04 74610.22 00:13:27.153 00:13:27.153 real 0m5.714s 00:13:27.153 user 0m8.987s 00:13:27.153 sys 0m1.655s 00:13:27.153 04:19:12 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.153 04:19:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:27.153 ************************************ 00:13:27.153 END TEST bdev_verify 00:13:27.153 ************************************ 00:13:27.153 04:19:12 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:27.153 04:19:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:27.153 04:19:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.153 04:19:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.153 ************************************ 00:13:27.153 START TEST bdev_verify_big_io 00:13:27.153 ************************************ 00:13:27.153 04:19:12 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:27.153 [2024-11-17 04:19:12.846038] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:27.153 [2024-11-17 04:19:12.846148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81838 ] 00:13:27.412 [2024-11-17 04:19:13.011255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:27.412 [2024-11-17 04:19:13.030923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.412 [2024-11-17 04:19:13.030955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.670 Running I/O for 5 seconds... 
00:13:32.346 984.00 IOPS, 61.50 MiB/s [2024-11-17T04:19:19.461Z] 2216.00 IOPS, 138.50 MiB/s [2024-11-17T04:19:19.461Z] 2597.33 IOPS, 162.33 MiB/s 00:13:33.734 Latency(us) 00:13:33.734 [2024-11-17T04:19:19.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.734 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:33.734 Verification LBA range: start 0x0 length 0xa000 00:13:33.734 nvme0n1 : 5.89 168.48 10.53 0.00 0.00 711409.95 87919.06 838860.80 00:13:33.734 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:33.734 Verification LBA range: start 0xa000 length 0xa000 00:13:33.734 nvme0n1 : 6.00 82.66 5.17 0.00 0.00 1499090.49 5368.91 1742249.35 00:13:33.734 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:33.734 Verification LBA range: start 0x0 length 0xbd0b 00:13:33.734 nvme1n1 : 5.89 162.95 10.18 0.00 0.00 719367.82 7914.73 1645457.72 00:13:33.734 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:33.734 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:33.734 nvme1n1 : 6.07 86.93 5.43 0.00 0.00 1392635.42 89935.56 1832588.21 00:13:33.734 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:33.734 Verification LBA range: start 0x0 length 0x8000 00:13:33.734 nvme2n1 : 5.92 162.05 10.13 0.00 0.00 729798.71 16535.24 1271196.75 00:13:33.734 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:33.734 Verification LBA range: start 0x8000 length 0x8000 00:13:33.734 nvme2n1 : 5.97 85.74 5.36 0.00 0.00 1363514.29 121796.14 1613193.85 00:13:33.734 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:33.734 Verification LBA range: start 0x0 length 0x8000 00:13:33.734 nvme2n2 : 5.91 161.04 10.06 0.00 0.00 718917.26 11443.59 1342177.28 00:13:33.734 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:33.734 Verification LBA range: start 0x8000 length 0x8000 00:13:33.734 nvme2n2 : 6.00 117.27 7.33 0.00 0.00 947044.90 44967.78 1400252.26 00:13:33.734 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:33.734 Verification LBA range: start 0x0 length 0x8000 00:13:33.735 nvme2n3 : 5.93 181.36 11.33 0.00 0.00 624793.78 12300.60 1187310.67 00:13:33.735 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:33.735 Verification LBA range: start 0x8000 length 0x8000 00:13:33.735 nvme2n3 : 6.01 94.07 5.88 0.00 0.00 1131802.70 43757.88 2013265.92 00:13:33.735 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:33.735 Verification LBA range: start 0x0 length 0x2000 00:13:33.735 nvme3n1 : 5.93 194.28 12.14 0.00 0.00 569859.68 12653.49 635598.38 00:13:33.735 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:33.735 Verification LBA range: start 0x2000 length 0x2000 00:13:33.735 nvme3n1 : 6.13 146.76 9.17 0.00 0.00 693973.32 1455.66 3974909.64 00:13:33.735 [2024-11-17T04:19:19.462Z] =================================================================================================================== 00:13:33.735 [2024-11-17T04:19:19.462Z] Total : 1643.58 102.72 0.00 0.00 838847.30 1455.66 3974909.64 00:13:33.996 00:13:33.996 real 0m6.906s 00:13:33.996 user 0m12.826s 00:13:33.996 sys 0m0.366s 00:13:33.996 04:19:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.996 
************************************ 00:13:33.996 END TEST bdev_verify_big_io 00:13:33.996 ************************************ 00:13:33.996 04:19:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.257 04:19:19 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:34.257 04:19:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:34.257 04:19:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.257 04:19:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.257 ************************************ 00:13:34.257 START TEST bdev_write_zeroes 00:13:34.257 ************************************ 00:13:34.257 04:19:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:34.257 [2024-11-17 04:19:19.826851] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:34.258 [2024-11-17 04:19:19.826988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81942 ] 00:13:34.519 [2024-11-17 04:19:19.989102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.519 [2024-11-17 04:19:20.030213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.780 Running I/O for 1 seconds... 
00:13:35.727 71584.00 IOPS, 279.62 MiB/s 00:13:35.727 Latency(us) 00:13:35.727 [2024-11-17T04:19:21.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.727 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.727 nvme0n1 : 1.02 11559.54 45.15 0.00 0.00 11062.32 5343.70 24197.91 00:13:35.727 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.727 nvme1n1 : 1.03 13760.61 53.75 0.00 0.00 9281.94 4637.93 21979.77 00:13:35.727 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.727 nvme2n1 : 1.03 11469.17 44.80 0.00 0.00 11052.67 4562.31 20164.92 00:13:35.727 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.727 nvme2n2 : 1.03 11456.23 44.75 0.00 0.00 11053.31 4537.11 20467.40 00:13:35.727 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.727 nvme2n3 : 1.03 11443.44 44.70 0.00 0.00 11056.87 4612.73 22383.06 00:13:35.727 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.727 nvme3n1 : 1.03 11429.98 44.65 0.00 0.00 11061.51 4738.76 24298.73 00:13:35.727 [2024-11-17T04:19:21.454Z] =================================================================================================================== 00:13:35.727 [2024-11-17T04:19:21.454Z] Total : 71118.97 277.81 0.00 0.00 10714.09 4537.11 24298.73 00:13:35.987 00:13:35.987 real 0m1.857s 00:13:35.987 user 0m1.103s 00:13:35.987 sys 0m0.565s 00:13:35.987 ************************************ 00:13:35.987 END TEST bdev_write_zeroes 00:13:35.987 ************************************ 00:13:35.987 04:19:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.987 04:19:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:35.987 04:19:21 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:35.987 04:19:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:35.987 04:19:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.987 04:19:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.987 ************************************ 00:13:35.987 START TEST bdev_json_nonenclosed 00:13:35.987 ************************************ 00:13:35.987 04:19:21 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.248 [2024-11-17 04:19:21.758304] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:13:36.248 [2024-11-17 04:19:21.758466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81986 ] 00:13:36.248 [2024-11-17 04:19:21.919508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.248 [2024-11-17 04:19:21.957681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.248 [2024-11-17 04:19:21.957806] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:36.248 [2024-11-17 04:19:21.957826] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.248 [2024-11-17 04:19:21.957841] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.509 00:13:36.509 real 0m0.362s 00:13:36.509 user 0m0.139s 00:13:36.509 sys 0m0.118s 00:13:36.509 04:19:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.509 ************************************ 00:13:36.509 END TEST bdev_json_nonenclosed 00:13:36.509 ************************************ 00:13:36.509 04:19:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:36.509 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.509 04:19:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:36.509 04:19:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.509 04:19:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.509 ************************************ 00:13:36.509 START TEST bdev_json_nonarray 00:13:36.509 ************************************ 00:13:36.509 04:19:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.509 [2024-11-17 04:19:22.186096] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:36.509 [2024-11-17 04:19:22.186225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82012 ] 00:13:36.769 [2024-11-17 04:19:22.350597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.769 [2024-11-17 04:19:22.390186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.769 [2024-11-17 04:19:22.390328] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
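Both JSON-config negative tests here feed bdevperf deliberately malformed files: nonenclosed.json is not wrapped in a top-level JSON object, and nonarray.json carries a "subsystems" value that is not an array, which is exactly what the two json_config_prepare_ctx errors above report. A minimal sketch of the shape the loader does accept (the file path is illustrative; the malloc parameters mirror the config dump saved later in this log, with optional fields omitted):

cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 }
        }
      ]
    }
  ]
}
EOF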
00:13:36.769 [2024-11-17 04:19:22.390353] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.769 [2024-11-17 04:19:22.390369] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.769 00:13:36.769 real 0m0.367s 00:13:36.769 user 0m0.150s 00:13:36.769 sys 0m0.112s 00:13:36.769 ************************************ 00:13:36.769 END TEST bdev_json_nonarray 00:13:36.769 ************************************ 00:13:36.769 04:19:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.769 04:19:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:37.029 04:19:22 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:37.600 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:52.507 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.752 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.752 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.752 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.752 00:14:04.752 real 1m13.312s 00:14:04.752 user 1m19.076s 00:14:04.752 sys 1m29.947s 00:14:04.752 04:19:50 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.752 ************************************ 00:14:04.752 END TEST blockdev_xnvme 00:14:04.752 ************************************ 00:14:04.752 04:19:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.015 04:19:50 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:05.015 04:19:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.015 04:19:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.015 04:19:50 -- common/autotest_common.sh@10 -- # set +x 00:14:05.015 ************************************ 00:14:05.015 START TEST ublk 00:14:05.015 ************************************ 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:05.015 * Looking for test storage... 
00:14:05.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:05.015 04:19:50 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.015 04:19:50 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.015 04:19:50 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.015 04:19:50 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.015 04:19:50 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.015 04:19:50 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.015 04:19:50 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.015 04:19:50 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.015 04:19:50 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.015 04:19:50 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.015 04:19:50 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.015 04:19:50 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:05.015 04:19:50 ublk -- scripts/common.sh@345 -- # : 1 00:14:05.015 04:19:50 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.015 04:19:50 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.015 04:19:50 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:05.015 04:19:50 ublk -- scripts/common.sh@353 -- # local d=1 00:14:05.015 04:19:50 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.015 04:19:50 ublk -- scripts/common.sh@355 -- # echo 1 00:14:05.015 04:19:50 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.015 04:19:50 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:05.015 04:19:50 ublk -- scripts/common.sh@353 -- # local d=2 00:14:05.015 04:19:50 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.015 04:19:50 ublk -- scripts/common.sh@355 -- # echo 2 00:14:05.015 04:19:50 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.015 04:19:50 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.015 04:19:50 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.015 04:19:50 ublk -- scripts/common.sh@368 -- # return 0 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.015 --rc genhtml_branch_coverage=1 00:14:05.015 --rc genhtml_function_coverage=1 00:14:05.015 --rc genhtml_legend=1 00:14:05.015 --rc geninfo_all_blocks=1 00:14:05.015 --rc geninfo_unexecuted_blocks=1 00:14:05.015 00:14:05.015 ' 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.015 --rc genhtml_branch_coverage=1 00:14:05.015 --rc genhtml_function_coverage=1 00:14:05.015 --rc genhtml_legend=1 00:14:05.015 --rc geninfo_all_blocks=1 00:14:05.015 --rc geninfo_unexecuted_blocks=1 00:14:05.015 00:14:05.015 ' 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.015 --rc genhtml_branch_coverage=1 00:14:05.015 --rc 
genhtml_function_coverage=1 00:14:05.015 --rc genhtml_legend=1 00:14:05.015 --rc geninfo_all_blocks=1 00:14:05.015 --rc geninfo_unexecuted_blocks=1 00:14:05.015 00:14:05.015 ' 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.015 --rc genhtml_branch_coverage=1 00:14:05.015 --rc genhtml_function_coverage=1 00:14:05.015 --rc genhtml_legend=1 00:14:05.015 --rc geninfo_all_blocks=1 00:14:05.015 --rc geninfo_unexecuted_blocks=1 00:14:05.015 00:14:05.015 ' 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:05.015 04:19:50 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:05.015 04:19:50 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:05.015 04:19:50 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:05.015 04:19:50 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:05.015 04:19:50 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:05.015 04:19:50 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:05.015 04:19:50 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:05.015 04:19:50 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:05.015 04:19:50 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.015 04:19:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.015 ************************************ 00:14:05.015 START TEST test_save_ublk_config 00:14:05.015 ************************************ 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82318 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82318 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 82318 ']' 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:05.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.015 04:19:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:05.277 [2024-11-17 04:19:50.806516] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:14:05.277 [2024-11-17 04:19:50.806663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82318 ] 00:14:05.277 [2024-11-17 04:19:50.969031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.537 [2024-11-17 04:19:51.009755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.109 04:19:51 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.109 04:19:51 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:06.109 04:19:51 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:06.109 04:19:51 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:06.109 04:19:51 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.109 04:19:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:06.109 [2024-11-17 04:19:51.672406] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:06.109 [2024-11-17 04:19:51.673527] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:06.109 malloc0 00:14:06.109 [2024-11-17 04:19:51.712549] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:06.109 [2024-11-17 04:19:51.712657] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:06.109 [2024-11-17 04:19:51.712668] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:06.109 [2024-11-17 04:19:51.712685] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:06.109 [2024-11-17 04:19:51.721535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:06.109 [2024-11-17 04:19:51.721576] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:06.109 [2024-11-17 04:19:51.729432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:06.109 [2024-11-17 04:19:51.729578] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:06.109 [2024-11-17 04:19:51.747423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:06.109 0 00:14:06.109 04:19:51 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.109 04:19:51 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:06.109 04:19:51 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.109 04:19:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:06.372 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.372 04:19:52 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:06.372 "subsystems": [ 00:14:06.372 { 00:14:06.372 "subsystem": "fsdev", 00:14:06.372 
"config": [ 00:14:06.372 { 00:14:06.372 "method": "fsdev_set_opts", 00:14:06.372 "params": { 00:14:06.372 "fsdev_io_pool_size": 65535, 00:14:06.372 "fsdev_io_cache_size": 256 00:14:06.372 } 00:14:06.372 } 00:14:06.372 ] 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "subsystem": "keyring", 00:14:06.372 "config": [] 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "subsystem": "iobuf", 00:14:06.372 "config": [ 00:14:06.372 { 00:14:06.372 "method": "iobuf_set_options", 00:14:06.372 "params": { 00:14:06.372 "small_pool_count": 8192, 00:14:06.372 "large_pool_count": 1024, 00:14:06.372 "small_bufsize": 8192, 00:14:06.372 "large_bufsize": 135168, 00:14:06.372 "enable_numa": false 00:14:06.372 } 00:14:06.372 } 00:14:06.372 ] 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "subsystem": "sock", 00:14:06.372 "config": [ 00:14:06.372 { 00:14:06.372 "method": "sock_set_default_impl", 00:14:06.372 "params": { 00:14:06.372 "impl_name": "posix" 00:14:06.372 } 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "method": "sock_impl_set_options", 00:14:06.372 "params": { 00:14:06.372 "impl_name": "ssl", 00:14:06.372 "recv_buf_size": 4096, 00:14:06.372 "send_buf_size": 4096, 00:14:06.372 "enable_recv_pipe": true, 00:14:06.372 "enable_quickack": false, 00:14:06.372 "enable_placement_id": 0, 00:14:06.372 "enable_zerocopy_send_server": true, 00:14:06.372 "enable_zerocopy_send_client": false, 00:14:06.372 "zerocopy_threshold": 0, 00:14:06.372 "tls_version": 0, 00:14:06.372 "enable_ktls": false 00:14:06.372 } 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "method": "sock_impl_set_options", 00:14:06.372 "params": { 00:14:06.372 "impl_name": "posix", 00:14:06.372 "recv_buf_size": 2097152, 00:14:06.372 "send_buf_size": 2097152, 00:14:06.372 "enable_recv_pipe": true, 00:14:06.372 "enable_quickack": false, 00:14:06.372 "enable_placement_id": 0, 00:14:06.372 "enable_zerocopy_send_server": true, 00:14:06.372 "enable_zerocopy_send_client": false, 00:14:06.372 "zerocopy_threshold": 0, 00:14:06.372 "tls_version": 0, 00:14:06.372 "enable_ktls": false 00:14:06.372 } 00:14:06.372 } 00:14:06.372 ] 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "subsystem": "vmd", 00:14:06.372 "config": [] 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "subsystem": "accel", 00:14:06.372 "config": [ 00:14:06.372 { 00:14:06.372 "method": "accel_set_options", 00:14:06.372 "params": { 00:14:06.372 "small_cache_size": 128, 00:14:06.372 "large_cache_size": 16, 00:14:06.372 "task_count": 2048, 00:14:06.372 "sequence_count": 2048, 00:14:06.372 "buf_count": 2048 00:14:06.372 } 00:14:06.372 } 00:14:06.372 ] 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "subsystem": "bdev", 00:14:06.372 "config": [ 00:14:06.372 { 00:14:06.372 "method": "bdev_set_options", 00:14:06.372 "params": { 00:14:06.372 "bdev_io_pool_size": 65535, 00:14:06.372 "bdev_io_cache_size": 256, 00:14:06.372 "bdev_auto_examine": true, 00:14:06.372 "iobuf_small_cache_size": 128, 00:14:06.372 "iobuf_large_cache_size": 16 00:14:06.372 } 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "method": "bdev_raid_set_options", 00:14:06.372 "params": { 00:14:06.372 "process_window_size_kb": 1024, 00:14:06.372 "process_max_bandwidth_mb_sec": 0 00:14:06.372 } 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "method": "bdev_iscsi_set_options", 00:14:06.372 "params": { 00:14:06.372 "timeout_sec": 30 00:14:06.372 } 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "method": "bdev_nvme_set_options", 00:14:06.372 "params": { 00:14:06.372 "action_on_timeout": "none", 00:14:06.372 "timeout_us": 0, 00:14:06.372 "timeout_admin_us": 0, 00:14:06.372 
"keep_alive_timeout_ms": 10000, 00:14:06.372 "arbitration_burst": 0, 00:14:06.372 "low_priority_weight": 0, 00:14:06.372 "medium_priority_weight": 0, 00:14:06.372 "high_priority_weight": 0, 00:14:06.372 "nvme_adminq_poll_period_us": 10000, 00:14:06.372 "nvme_ioq_poll_period_us": 0, 00:14:06.372 "io_queue_requests": 0, 00:14:06.372 "delay_cmd_submit": true, 00:14:06.372 "transport_retry_count": 4, 00:14:06.372 "bdev_retry_count": 3, 00:14:06.372 "transport_ack_timeout": 0, 00:14:06.372 "ctrlr_loss_timeout_sec": 0, 00:14:06.372 "reconnect_delay_sec": 0, 00:14:06.372 "fast_io_fail_timeout_sec": 0, 00:14:06.372 "disable_auto_failback": false, 00:14:06.372 "generate_uuids": false, 00:14:06.372 "transport_tos": 0, 00:14:06.372 "nvme_error_stat": false, 00:14:06.372 "rdma_srq_size": 0, 00:14:06.372 "io_path_stat": false, 00:14:06.372 "allow_accel_sequence": false, 00:14:06.372 "rdma_max_cq_size": 0, 00:14:06.372 "rdma_cm_event_timeout_ms": 0, 00:14:06.372 "dhchap_digests": [ 00:14:06.372 "sha256", 00:14:06.372 "sha384", 00:14:06.372 "sha512" 00:14:06.372 ], 00:14:06.372 "dhchap_dhgroups": [ 00:14:06.372 "null", 00:14:06.372 "ffdhe2048", 00:14:06.372 "ffdhe3072", 00:14:06.372 "ffdhe4096", 00:14:06.372 "ffdhe6144", 00:14:06.372 "ffdhe8192" 00:14:06.372 ] 00:14:06.372 } 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "method": "bdev_nvme_set_hotplug", 00:14:06.372 "params": { 00:14:06.372 "period_us": 100000, 00:14:06.372 "enable": false 00:14:06.372 } 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "method": "bdev_malloc_create", 00:14:06.372 "params": { 00:14:06.372 "name": "malloc0", 00:14:06.372 "num_blocks": 8192, 00:14:06.372 "block_size": 4096, 00:14:06.372 "physical_block_size": 4096, 00:14:06.372 "uuid": "fc467daa-535f-48fa-8220-b9d82c4080a5", 00:14:06.372 "optimal_io_boundary": 0, 00:14:06.372 "md_size": 0, 00:14:06.372 "dif_type": 0, 00:14:06.372 "dif_is_head_of_md": false, 00:14:06.372 "dif_pi_format": 0 00:14:06.372 } 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "method": "bdev_wait_for_examine" 00:14:06.372 } 00:14:06.372 ] 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "subsystem": "scsi", 00:14:06.372 "config": null 00:14:06.372 }, 00:14:06.372 { 00:14:06.372 "subsystem": "scheduler", 00:14:06.372 "config": [ 00:14:06.372 { 00:14:06.372 "method": "framework_set_scheduler", 00:14:06.372 "params": { 00:14:06.372 "name": "static" 00:14:06.372 } 00:14:06.372 } 00:14:06.372 ] 00:14:06.372 }, 00:14:06.372 { 00:14:06.373 "subsystem": "vhost_scsi", 00:14:06.373 "config": [] 00:14:06.373 }, 00:14:06.373 { 00:14:06.373 "subsystem": "vhost_blk", 00:14:06.373 "config": [] 00:14:06.373 }, 00:14:06.373 { 00:14:06.373 "subsystem": "ublk", 00:14:06.373 "config": [ 00:14:06.373 { 00:14:06.373 "method": "ublk_create_target", 00:14:06.373 "params": { 00:14:06.373 "cpumask": "1" 00:14:06.373 } 00:14:06.373 }, 00:14:06.373 { 00:14:06.373 "method": "ublk_start_disk", 00:14:06.373 "params": { 00:14:06.373 "bdev_name": "malloc0", 00:14:06.373 "ublk_id": 0, 00:14:06.373 "num_queues": 1, 00:14:06.373 "queue_depth": 128 00:14:06.373 } 00:14:06.373 } 00:14:06.373 ] 00:14:06.373 }, 00:14:06.373 { 00:14:06.373 "subsystem": "nbd", 00:14:06.373 "config": [] 00:14:06.373 }, 00:14:06.373 { 00:14:06.373 "subsystem": "nvmf", 00:14:06.373 "config": [ 00:14:06.373 { 00:14:06.373 "method": "nvmf_set_config", 00:14:06.373 "params": { 00:14:06.373 "discovery_filter": "match_any", 00:14:06.373 "admin_cmd_passthru": { 00:14:06.373 "identify_ctrlr": false 00:14:06.373 }, 00:14:06.373 "dhchap_digests": [ 00:14:06.373 "sha256", 00:14:06.373 
"sha384", 00:14:06.373 "sha512" 00:14:06.373 ], 00:14:06.373 "dhchap_dhgroups": [ 00:14:06.373 "null", 00:14:06.373 "ffdhe2048", 00:14:06.373 "ffdhe3072", 00:14:06.373 "ffdhe4096", 00:14:06.373 "ffdhe6144", 00:14:06.373 "ffdhe8192" 00:14:06.373 ] 00:14:06.373 } 00:14:06.373 }, 00:14:06.373 { 00:14:06.373 "method": "nvmf_set_max_subsystems", 00:14:06.373 "params": { 00:14:06.373 "max_subsystems": 1024 00:14:06.373 } 00:14:06.373 }, 00:14:06.373 { 00:14:06.373 "method": "nvmf_set_crdt", 00:14:06.373 "params": { 00:14:06.373 "crdt1": 0, 00:14:06.373 "crdt2": 0, 00:14:06.373 "crdt3": 0 00:14:06.373 } 00:14:06.373 } 00:14:06.373 ] 00:14:06.373 }, 00:14:06.373 { 00:14:06.373 "subsystem": "iscsi", 00:14:06.373 "config": [ 00:14:06.373 { 00:14:06.373 "method": "iscsi_set_options", 00:14:06.373 "params": { 00:14:06.373 "node_base": "iqn.2016-06.io.spdk", 00:14:06.373 "max_sessions": 128, 00:14:06.373 "max_connections_per_session": 2, 00:14:06.373 "max_queue_depth": 64, 00:14:06.373 "default_time2wait": 2, 00:14:06.373 "default_time2retain": 20, 00:14:06.373 "first_burst_length": 8192, 00:14:06.373 "immediate_data": true, 00:14:06.373 "allow_duplicated_isid": false, 00:14:06.373 "error_recovery_level": 0, 00:14:06.373 "nop_timeout": 60, 00:14:06.373 "nop_in_interval": 30, 00:14:06.373 "disable_chap": false, 00:14:06.373 "require_chap": false, 00:14:06.373 "mutual_chap": false, 00:14:06.373 "chap_group": 0, 00:14:06.373 "max_large_datain_per_connection": 64, 00:14:06.373 "max_r2t_per_connection": 4, 00:14:06.373 "pdu_pool_size": 36864, 00:14:06.373 "immediate_data_pool_size": 16384, 00:14:06.373 "data_out_pool_size": 2048 00:14:06.373 } 00:14:06.373 } 00:14:06.373 ] 00:14:06.373 } 00:14:06.373 ] 00:14:06.373 }' 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82318 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 82318 ']' 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 82318 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82318 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.373 killing process with pid 82318 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82318' 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 82318 00:14:06.373 04:19:52 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 82318 00:14:06.945 [2024-11-17 04:19:52.432284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.945 [2024-11-17 04:19:52.478427] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.945 [2024-11-17 04:19:52.478596] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.945 [2024-11-17 04:19:52.484513] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.945 [2024-11-17 04:19:52.484590] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:14:06.945 [2024-11-17 04:19:52.484601] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:06.945 [2024-11-17 04:19:52.484664] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:06.945 [2024-11-17 04:19:52.485212] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:07.517 04:19:53 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82356 00:14:07.517 04:19:53 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82356 00:14:07.517 04:19:53 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 82356 ']' 00:14:07.518 04:19:53 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.518 04:19:53 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.518 04:19:53 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.518 04:19:53 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.518 04:19:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:07.518 04:19:53 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:07.518 "subsystems": [ 00:14:07.518 { 00:14:07.518 "subsystem": "fsdev", 00:14:07.518 "config": [ 00:14:07.518 { 00:14:07.518 "method": "fsdev_set_opts", 00:14:07.518 "params": { 00:14:07.518 "fsdev_io_pool_size": 65535, 00:14:07.518 "fsdev_io_cache_size": 256 00:14:07.518 } 00:14:07.518 } 00:14:07.518 ] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "keyring", 00:14:07.518 "config": [] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "iobuf", 00:14:07.518 "config": [ 00:14:07.518 { 00:14:07.518 "method": "iobuf_set_options", 00:14:07.518 "params": { 00:14:07.518 "small_pool_count": 8192, 00:14:07.518 "large_pool_count": 1024, 00:14:07.518 "small_bufsize": 8192, 00:14:07.518 "large_bufsize": 135168, 00:14:07.518 "enable_numa": false 00:14:07.518 } 00:14:07.518 } 00:14:07.518 ] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "sock", 00:14:07.518 "config": [ 00:14:07.518 { 00:14:07.518 "method": "sock_set_default_impl", 00:14:07.518 "params": { 00:14:07.518 "impl_name": "posix" 00:14:07.518 } 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "method": "sock_impl_set_options", 00:14:07.518 "params": { 00:14:07.518 "impl_name": "ssl", 00:14:07.518 "recv_buf_size": 4096, 00:14:07.518 "send_buf_size": 4096, 00:14:07.518 "enable_recv_pipe": true, 00:14:07.518 "enable_quickack": false, 00:14:07.518 "enable_placement_id": 0, 00:14:07.518 "enable_zerocopy_send_server": true, 00:14:07.518 "enable_zerocopy_send_client": false, 00:14:07.518 "zerocopy_threshold": 0, 00:14:07.518 "tls_version": 0, 00:14:07.518 "enable_ktls": false 00:14:07.518 } 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "method": "sock_impl_set_options", 00:14:07.518 "params": { 00:14:07.518 "impl_name": "posix", 00:14:07.518 "recv_buf_size": 2097152, 00:14:07.518 "send_buf_size": 2097152, 00:14:07.518 "enable_recv_pipe": true, 00:14:07.518 "enable_quickack": false, 00:14:07.518 "enable_placement_id": 0, 00:14:07.518 "enable_zerocopy_send_server": true, 00:14:07.518 "enable_zerocopy_send_client": false, 00:14:07.518 "zerocopy_threshold": 0, 00:14:07.518 "tls_version": 0, 00:14:07.518 "enable_ktls": false 00:14:07.518 } 00:14:07.518 } 00:14:07.518 ] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 
"subsystem": "vmd", 00:14:07.518 "config": [] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "accel", 00:14:07.518 "config": [ 00:14:07.518 { 00:14:07.518 "method": "accel_set_options", 00:14:07.518 "params": { 00:14:07.518 "small_cache_size": 128, 00:14:07.518 "large_cache_size": 16, 00:14:07.518 "task_count": 2048, 00:14:07.518 "sequence_count": 2048, 00:14:07.518 "buf_count": 2048 00:14:07.518 } 00:14:07.518 } 00:14:07.518 ] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "bdev", 00:14:07.518 "config": [ 00:14:07.518 { 00:14:07.518 "method": "bdev_set_options", 00:14:07.518 "params": { 00:14:07.518 "bdev_io_pool_size": 65535, 00:14:07.518 "bdev_io_cache_size": 256, 00:14:07.518 "bdev_auto_examine": true, 00:14:07.518 "iobuf_small_cache_size": 128, 00:14:07.518 "iobuf_large_cache_size": 16 00:14:07.518 } 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "method": "bdev_raid_set_options", 00:14:07.518 "params": { 00:14:07.518 "process_window_size_kb": 1024, 00:14:07.518 "process_max_bandwidth_mb_sec": 0 00:14:07.518 } 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "method": "bdev_iscsi_set_options", 00:14:07.518 "params": { 00:14:07.518 "timeout_sec": 30 00:14:07.518 } 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "method": "bdev_nvme_set_options", 00:14:07.518 "params": { 00:14:07.518 "action_on_timeout": "none", 00:14:07.518 "timeout_us": 0, 00:14:07.518 "timeout_admin_us": 0, 00:14:07.518 "keep_alive_timeout_ms": 10000, 00:14:07.518 "arbitration_burst": 0, 00:14:07.518 "low_priority_weight": 0, 00:14:07.518 "medium_priority_weight": 0, 00:14:07.518 "high_priority_weight": 0, 00:14:07.518 "nvme_adminq_poll_period_us": 10000, 00:14:07.518 "nvme_ioq_poll_period_us": 0, 00:14:07.518 "io_queue_requests": 0, 00:14:07.518 "delay_cmd_submit": true, 00:14:07.518 "transport_retry_count": 4, 00:14:07.518 "bdev_retry_count": 3, 00:14:07.518 "transport_ack_timeout": 0, 00:14:07.518 "ctrlr_loss_timeout_sec": 0, 00:14:07.518 "reconnect_delay_sec": 0, 00:14:07.518 "fast_io_fail_timeout_sec": 0, 00:14:07.518 "disable_auto_failback": false, 00:14:07.518 "generate_uuids": false, 00:14:07.518 "transport_tos": 0, 00:14:07.518 "nvme_error_stat": false, 00:14:07.518 "rdma_srq_size": 0, 00:14:07.518 "io_path_stat": false, 00:14:07.518 "allow_accel_sequence": false, 00:14:07.518 "rdma_max_cq_size": 0, 00:14:07.518 "rdma_cm_event_timeout_ms": 0, 00:14:07.518 "dhchap_digests": [ 00:14:07.518 "sha256", 00:14:07.518 "sha384", 00:14:07.518 "sha512" 00:14:07.518 ], 00:14:07.518 "dhchap_dhgroups": [ 00:14:07.518 "null", 00:14:07.518 "ffdhe2048", 00:14:07.518 "ffdhe3072", 00:14:07.518 "ffdhe4096", 00:14:07.518 "ffdhe6144", 00:14:07.518 "ffdhe8192" 00:14:07.518 ] 00:14:07.518 } 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "method": "bdev_nvme_set_hotplug", 00:14:07.518 "params": { 00:14:07.518 "period_us": 100000, 00:14:07.518 "enable": false 00:14:07.518 } 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "method": "bdev_malloc_create", 00:14:07.518 "params": { 00:14:07.518 "name": "malloc0", 00:14:07.518 "num_blocks": 8192, 00:14:07.518 "block_size": 4096, 00:14:07.518 "physical_block_size": 4096, 00:14:07.518 "uuid": "fc467daa-535f-48fa-8220-b9d82c4080a5", 00:14:07.518 "optimal_io_boundary": 0, 00:14:07.518 "md_size": 0, 00:14:07.518 "dif_type": 0, 00:14:07.518 "dif_is_head_of_md": false, 00:14:07.518 "dif_pi_format": 0 00:14:07.518 } 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "method": "bdev_wait_for_examine" 00:14:07.518 } 00:14:07.518 ] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "scsi", 
00:14:07.518 "config": null 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "scheduler", 00:14:07.518 "config": [ 00:14:07.518 { 00:14:07.518 "method": "framework_set_scheduler", 00:14:07.518 "params": { 00:14:07.518 "name": "static" 00:14:07.518 } 00:14:07.518 } 00:14:07.518 ] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "vhost_scsi", 00:14:07.518 "config": [] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "vhost_blk", 00:14:07.518 "config": [] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "ublk", 00:14:07.518 "config": [ 00:14:07.518 { 00:14:07.518 "method": "ublk_create_target", 00:14:07.518 "params": { 00:14:07.518 "cpumask": "1" 00:14:07.518 } 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "method": "ublk_start_disk", 00:14:07.518 "params": { 00:14:07.518 "bdev_name": "malloc0", 00:14:07.518 "ublk_id": 0, 00:14:07.518 "num_queues": 1, 00:14:07.518 "queue_depth": 128 00:14:07.518 } 00:14:07.518 } 00:14:07.518 ] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "nbd", 00:14:07.518 "config": [] 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "subsystem": "nvmf", 00:14:07.518 "config": [ 00:14:07.518 { 00:14:07.518 "method": "nvmf_set_config", 00:14:07.518 "params": { 00:14:07.518 "discovery_filter": "match_any", 00:14:07.518 "admin_cmd_passthru": { 00:14:07.518 "identify_ctrlr": false 00:14:07.518 }, 00:14:07.518 "dhchap_digests": [ 00:14:07.518 "sha256", 00:14:07.518 "sha384", 00:14:07.518 "sha512" 00:14:07.518 ], 00:14:07.518 "dhchap_dhgroups": [ 00:14:07.518 "null", 00:14:07.518 "ffdhe2048", 00:14:07.518 "ffdhe3072", 00:14:07.518 "ffdhe4096", 00:14:07.518 "ffdhe6144", 00:14:07.518 "ffdhe8192" 00:14:07.518 ] 00:14:07.518 } 00:14:07.518 }, 00:14:07.518 { 00:14:07.518 "method": "nvmf_set_max_subsystems", 00:14:07.519 "params": { 00:14:07.519 "max_subsystems": 1024 00:14:07.519 } 00:14:07.519 }, 00:14:07.519 { 00:14:07.519 "method": "nvmf_set_crdt", 00:14:07.519 "params": { 00:14:07.519 "crdt1": 0, 00:14:07.519 "crdt2": 0, 00:14:07.519 "crdt3": 0 00:14:07.519 } 00:14:07.519 } 00:14:07.519 ] 00:14:07.519 }, 00:14:07.519 { 00:14:07.519 "subsystem": "iscsi", 00:14:07.519 "config": [ 00:14:07.519 { 00:14:07.519 "method": "iscsi_set_options", 00:14:07.519 "params": { 00:14:07.519 "node_base": "iqn.2016-06.io.spdk", 00:14:07.519 "max_sessions": 128, 00:14:07.519 "max_connections_per_session": 2, 00:14:07.519 "max_queue_depth": 64, 00:14:07.519 "default_time2wait": 2, 00:14:07.519 "default_time2retain": 20, 00:14:07.519 "first_burst_length": 8192, 00:14:07.519 "immediate_data": true, 00:14:07.519 "allow_duplicated_isid": false, 00:14:07.519 "error_recovery_level": 0, 00:14:07.519 "nop_timeout": 60, 00:14:07.519 "nop_in_interval": 30, 00:14:07.519 "disable_chap": false, 00:14:07.519 "require_chap": false, 00:14:07.519 "mutual_chap": false, 00:14:07.519 "chap_group": 0, 00:14:07.519 "max_large_datain_per_connection": 64, 00:14:07.519 "max_r2t_per_connection": 4, 00:14:07.519 "pdu_pool_size": 36864, 00:14:07.519 "immediate_data_pool_size": 16384, 00:14:07.519 "data_out_pool_size": 2048 00:14:07.519 } 00:14:07.519 } 00:14:07.519 ] 00:14:07.519 } 00:14:07.519 ] 00:14:07.519 }' 00:14:07.519 04:19:53 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:07.519 [2024-11-17 04:19:53.163784] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:14:07.519 [2024-11-17 04:19:53.163933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82356 ] 00:14:07.780 [2024-11-17 04:19:53.327409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.780 [2024-11-17 04:19:53.367473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.353 [2024-11-17 04:19:53.844403] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:08.353 [2024-11-17 04:19:53.844844] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:08.353 [2024-11-17 04:19:53.852548] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:08.353 [2024-11-17 04:19:53.852653] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:08.353 [2024-11-17 04:19:53.852668] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:08.353 [2024-11-17 04:19:53.852682] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.353 [2024-11-17 04:19:53.861532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.353 [2024-11-17 04:19:53.861572] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.354 [2024-11-17 04:19:53.863537] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.354 [2024-11-17 04:19:53.863660] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:08.354 [2024-11-17 04:19:53.873476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82356 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 82356 ']' 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 82356 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.354 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82356 00:14:08.615 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.615 04:19:54 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.615 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82356' 00:14:08.615 killing process with pid 82356 00:14:08.615 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 82356 00:14:08.615 04:19:54 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 82356 00:14:08.875 [2024-11-17 04:19:54.487419] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:08.875 [2024-11-17 04:19:54.521538] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:08.875 [2024-11-17 04:19:54.521699] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:08.875 [2024-11-17 04:19:54.528677] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:08.875 [2024-11-17 04:19:54.528757] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:08.875 [2024-11-17 04:19:54.528768] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:08.875 [2024-11-17 04:19:54.528802] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:08.875 [2024-11-17 04:19:54.529504] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:09.447 04:19:55 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:09.447 00:14:09.447 real 0m4.397s 00:14:09.447 user 0m2.775s 00:14:09.447 sys 0m2.276s 00:14:09.447 04:19:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.447 ************************************ 00:14:09.447 END TEST test_save_ublk_config 00:14:09.447 ************************************ 00:14:09.447 04:19:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:09.708 04:19:55 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82412 00:14:09.708 04:19:55 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.708 04:19:55 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82412 00:14:09.708 04:19:55 ublk -- common/autotest_common.sh@835 -- # '[' -z 82412 ']' 00:14:09.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.708 04:19:55 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.708 04:19:55 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.708 04:19:55 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:09.708 04:19:55 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.708 04:19:55 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.708 04:19:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.708 [2024-11-17 04:19:55.274640] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:14:09.709 [2024-11-17 04:19:55.275073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82412 ] 00:14:09.969 [2024-11-17 04:19:55.440221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.969 [2024-11-17 04:19:55.482227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.970 [2024-11-17 04:19:55.482307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.542 04:19:56 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.542 04:19:56 ublk -- common/autotest_common.sh@868 -- # return 0 00:14:10.542 04:19:56 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:10.542 04:19:56 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:10.542 04:19:56 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.542 04:19:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.542 ************************************ 00:14:10.542 START TEST test_create_ublk 00:14:10.542 ************************************ 00:14:10.542 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:14:10.542 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:10.542 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.542 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.542 [2024-11-17 04:19:56.125405] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:10.542 [2024-11-17 04:19:56.127587] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:10.542 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.542 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:10.542 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:10.542 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.542 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.542 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.542 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:10.542 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:10.542 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.542 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.542 [2024-11-17 04:19:56.237599] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:10.543 [2024-11-17 04:19:56.238110] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:10.543 [2024-11-17 04:19:56.238130] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:10.543 [2024-11-17 04:19:56.238152] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:10.543 [2024-11-17 04:19:56.247321] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:10.543 [2024-11-17 04:19:56.247365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:10.543 
[2024-11-17 04:19:56.249551] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:10.543 [2024-11-17 04:19:56.251019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:10.805 [2024-11-17 04:19:56.287431] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:10.805 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:10.805 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.805 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.805 04:19:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:10.805 { 00:14:10.805 "ublk_device": "/dev/ublkb0", 00:14:10.805 "id": 0, 00:14:10.805 "queue_depth": 512, 00:14:10.805 "num_queues": 4, 00:14:10.805 "bdev_name": "Malloc0" 00:14:10.805 } 00:14:10.805 ]' 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:10.805 04:19:56 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
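The fio job assembled above is about to run against /dev/ublkb0. As a rough standalone equivalent of what the test has set up so far (a sketch only: the scripts/rpc.py path is an assumption, since the test's rpc_cmd wrapper resolves the RPC socket itself, while the RPC names, arguments, and fio flags are copied from the trace):

modprobe ublk_drv                                        # kernel ublk driver, loaded earlier in this log
./scripts/rpc.py ublk_create_target                      # create the ublk target inside spdk_tgt
./scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096  # 128 MiB RAM-backed bdev with 4 KiB blocks
./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # expose it as /dev/ublkb0 (4 queues, depth 512)
fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0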
00:14:10.805 04:19:56 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:11.067 fio: verification read phase will never start because write phase uses all of runtime 00:14:11.067 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:11.067 fio-3.35 00:14:11.067 Starting 1 process 00:14:21.049 00:14:21.049 fio_test: (groupid=0, jobs=1): err= 0: pid=82457: Sun Nov 17 04:20:06 2024 00:14:21.049 write: IOPS=16.3k, BW=63.9MiB/s (67.0MB/s)(639MiB/10001msec); 0 zone resets 00:14:21.049 clat (usec): min=31, max=3940, avg=60.48, stdev=82.41 00:14:21.049 lat (usec): min=31, max=3940, avg=60.88, stdev=82.43 00:14:21.049 clat percentiles (usec): 00:14:21.049 | 1.00th=[ 44], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 53], 00:14:21.049 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:14:21.049 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 64], 95.00th=[ 69], 00:14:21.049 | 99.00th=[ 110], 99.50th=[ 123], 99.90th=[ 1450], 99.95th=[ 2442], 00:14:21.049 | 99.99th=[ 3458] 00:14:21.049 bw ( KiB/s): min=48352, max=66944, per=99.98%, avg=65385.68, stdev=4225.04, samples=19 00:14:21.049 iops : min=12088, max=16736, avg=16346.42, stdev=1056.26, samples=19 00:14:21.049 lat (usec) : 50=7.34%, 100=91.17%, 250=1.31%, 500=0.05%, 750=0.01% 00:14:21.049 lat (usec) : 1000=0.01% 00:14:21.049 lat (msec) : 2=0.05%, 4=0.07% 00:14:21.049 cpu : usr=1.63%, sys=8.13%, ctx=163545, majf=0, minf=796 00:14:21.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:21.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.049 issued rwts: total=0,163516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:21.049 00:14:21.049 Run status group 0 (all jobs): 00:14:21.049 WRITE: bw=63.9MiB/s (67.0MB/s), 63.9MiB/s-63.9MiB/s (67.0MB/s-67.0MB/s), io=639MiB (670MB), run=10001-10001msec 00:14:21.049 00:14:21.049 Disk stats (read/write): 00:14:21.049 ublkb0: ios=0/161728, merge=0/0, ticks=0/8954, in_queue=8955, util=98.91% 00:14:21.049 04:20:06 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:21.049 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.049 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.049 [2024-11-17 04:20:06.727256] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:21.049 [2024-11-17 04:20:06.768426] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:21.049 [2024-11-17 04:20:06.769213] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:21.307 [2024-11-17 04:20:06.779393] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:21.308 [2024-11-17 04:20:06.779681] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:21.308 [2024-11-17 04:20:06.779690] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.308 04:20:06 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.308 [2024-11-17 04:20:06.787484] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:21.308 request: 00:14:21.308 { 00:14:21.308 "ublk_id": 0, 00:14:21.308 "method": "ublk_stop_disk", 00:14:21.308 "req_id": 1 00:14:21.308 } 00:14:21.308 Got JSON-RPC error response 00:14:21.308 response: 00:14:21.308 { 00:14:21.308 "code": -19, 00:14:21.308 "message": "No such device" 00:14:21.308 } 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.308 04:20:06 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.308 [2024-11-17 04:20:06.803460] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:21.308 [2024-11-17 04:20:06.805316] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:21.308 [2024-11-17 04:20:06.805346] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.308 04:20:06 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.308 04:20:06 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:21.308 04:20:06 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.308 04:20:06 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:21.308 04:20:06 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:21.308 04:20:06 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:21.308 04:20:06 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.308 04:20:06 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:21.308 04:20:06 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:21.308 ************************************ 00:14:21.308 END TEST test_create_ublk 00:14:21.308 ************************************ 00:14:21.308 04:20:06 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:21.308 00:14:21.308 real 0m10.864s 00:14:21.308 user 0m0.469s 00:14:21.308 sys 0m0.894s 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.308 04:20:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.308 04:20:07 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:21.308 04:20:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:21.308 04:20:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.308 04:20:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.308 ************************************ 00:14:21.308 START TEST test_create_multi_ublk 00:14:21.308 ************************************ 00:14:21.308 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:14:21.308 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:21.308 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.308 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.308 [2024-11-17 04:20:07.030390] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:21.308 [2024-11-17 04:20:07.031493] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:21.308 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.308 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.566 [2024-11-17 04:20:07.126502] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
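The create path being traced here, condensed into hand-run commands as a minimal sketch (the rpc.py path, the 128 MiB / 4096-byte malloc bdev, and the 4-queue, depth-512 ublk parameters are the ones shown in the log; everything else is assumed):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC ublk_create_target                      # "UBLK target created successfully"
$RPC bdev_malloc_create -b Malloc0 128 4096  # 128 MiB backing bdev, 4096-byte blocks
$RPC ublk_start_disk Malloc0 0 -q 4 -d 512   # exports the bdev as /dev/ublkb0

The multi-ublk test repeats the last two commands for Malloc1/2/3 with ids 1-3, which is what the UBLK_CMD_ADD_DEV / SET_PARAMS / START_DEV sequences below correspond to.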
00:14:21.566 [2024-11-17 04:20:07.126807] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:21.566 [2024-11-17 04:20:07.126821] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:21.566 [2024-11-17 04:20:07.126827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.566 [2024-11-17 04:20:07.138442] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.566 [2024-11-17 04:20:07.138460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.566 [2024-11-17 04:20:07.150408] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.566 [2024-11-17 04:20:07.150927] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:21.566 [2024-11-17 04:20:07.160432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.566 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.566 [2024-11-17 04:20:07.261498] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:21.566 [2024-11-17 04:20:07.261809] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:21.566 [2024-11-17 04:20:07.261820] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:21.566 [2024-11-17 04:20:07.261827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.566 [2024-11-17 04:20:07.273429] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.566 [2024-11-17 04:20:07.273448] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.566 [2024-11-17 04:20:07.285394] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.566 [2024-11-17 04:20:07.285911] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:21.824 [2024-11-17 04:20:07.302402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.824 04:20:07 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.824 [2024-11-17 04:20:07.409500] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:21.824 [2024-11-17 04:20:07.409814] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:21.824 [2024-11-17 04:20:07.409828] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:21.824 [2024-11-17 04:20:07.409833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.824 [2024-11-17 04:20:07.421419] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.824 [2024-11-17 04:20:07.421436] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.824 [2024-11-17 04:20:07.433395] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.824 [2024-11-17 04:20:07.433914] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:21.824 [2024-11-17 04:20:07.458407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.824 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.083 [2024-11-17 04:20:07.565486] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:22.083 [2024-11-17 04:20:07.565808] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:22.083 [2024-11-17 04:20:07.565820] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:22.083 [2024-11-17 04:20:07.565827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:22.083 [2024-11-17 
04:20:07.577411] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:22.083 [2024-11-17 04:20:07.577433] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:22.083 [2024-11-17 04:20:07.589394] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:22.083 [2024-11-17 04:20:07.589899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:22.083 [2024-11-17 04:20:07.614408] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:22.083 { 00:14:22.083 "ublk_device": "/dev/ublkb0", 00:14:22.083 "id": 0, 00:14:22.083 "queue_depth": 512, 00:14:22.083 "num_queues": 4, 00:14:22.083 "bdev_name": "Malloc0" 00:14:22.083 }, 00:14:22.083 { 00:14:22.083 "ublk_device": "/dev/ublkb1", 00:14:22.083 "id": 1, 00:14:22.083 "queue_depth": 512, 00:14:22.083 "num_queues": 4, 00:14:22.083 "bdev_name": "Malloc1" 00:14:22.083 }, 00:14:22.083 { 00:14:22.083 "ublk_device": "/dev/ublkb2", 00:14:22.083 "id": 2, 00:14:22.083 "queue_depth": 512, 00:14:22.083 "num_queues": 4, 00:14:22.083 "bdev_name": "Malloc2" 00:14:22.083 }, 00:14:22.083 { 00:14:22.083 "ublk_device": "/dev/ublkb3", 00:14:22.083 "id": 3, 00:14:22.083 "queue_depth": 512, 00:14:22.083 "num_queues": 4, 00:14:22.083 "bdev_name": "Malloc3" 00:14:22.083 } 00:14:22.083 ]' 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.083 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
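The JSON array above is the ublk_get_disks view once all four devices are up; the assertions that follow simply pull individual fields out with jq. A hand-run equivalent, sketched on the assumption that the same target is still running:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
disks=$($RPC ublk_get_disks)               # same JSON as shown above
echo "$disks" | jq -r '.[0].ublk_device'   # expected: /dev/ublkb0
echo "$disks" | jq -r '.[0].queue_depth'   # expected: 512
echo "$disks" | jq -r '.[0].num_queues'    # expected: 4
echo "$disks" | jq -r '.[0].bdev_name'     # expected: Malloc0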
00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.341 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:22.342 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:22.342 04:20:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:22.342 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:22.342 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:22.342 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.342 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:22.599 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.600 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.600 [2024-11-17 04:20:08.292472] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.858 [2024-11-17 04:20:08.330913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:22.858 [2024-11-17 04:20:08.332042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.858 [2024-11-17 04:20:08.340416] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.858 [2024-11-17 04:20:08.340654] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:22.858 [2024-11-17 04:20:08.340667] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.858 [2024-11-17 04:20:08.356457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.858 [2024-11-17 04:20:08.389936] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:22.858 [2024-11-17 04:20:08.391012] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.858 [2024-11-17 04:20:08.396408] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.858 [2024-11-17 04:20:08.396648] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:22.858 [2024-11-17 04:20:08.396660] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.858 [2024-11-17 04:20:08.410472] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.858 [2024-11-17 04:20:08.444932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:22.858 [2024-11-17 04:20:08.445947] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.858 [2024-11-17 04:20:08.460393] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.858 [2024-11-17 04:20:08.460628] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:22.858 [2024-11-17 04:20:08.460640] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
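The teardown traced in this stretch amounts to stopping each of the four disks and then destroying the target. A sketch of the equivalent commands (MAX_DEV_ID=3 matches the seq 0 3 in the log, and the -t 120 timeout on ublk_destroy_target is the one used a little further down):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 0 3); do
    $RPC ublk_stop_disk "$i"      # each stop issues UBLK_CMD_STOP_DEV then UBLK_CMD_DEL_DEV
done
$RPC -t 120 ublk_destroy_target   # longer RPC timeout, as in the log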
00:14:22.858 [2024-11-17 04:20:08.464539] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.858 [2024-11-17 04:20:08.495870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:22.858 [2024-11-17 04:20:08.496859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.858 [2024-11-17 04:20:08.506400] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.858 [2024-11-17 04:20:08.506623] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:22.858 [2024-11-17 04:20:08.506634] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.858 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:23.116 [2024-11-17 04:20:08.698460] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:23.116 [2024-11-17 04:20:08.700005] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:23.116 [2024-11-17 04:20:08.700034] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:23.116 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:23.116 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.117 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:23.117 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.117 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.117 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.117 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.117 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:23.117 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.117 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.375 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.375 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.375 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:23.375 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.375 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.375 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.375 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.375 04:20:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:23.375 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.375 04:20:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:23.375 04:20:09 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:23.634 04:20:09 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:23.634 ************************************ 00:14:23.634 END TEST test_create_multi_ublk 00:14:23.634 ************************************ 00:14:23.634 00:14:23.634 real 0m2.089s 00:14:23.634 user 0m0.812s 00:14:23.634 sys 0m0.144s 00:14:23.634 04:20:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.634 04:20:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.634 04:20:09 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:23.634 04:20:09 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:23.634 04:20:09 ublk -- ublk/ublk.sh@130 -- # killprocess 82412 00:14:23.634 04:20:09 ublk -- common/autotest_common.sh@954 -- # '[' -z 82412 ']' 00:14:23.634 04:20:09 ublk -- common/autotest_common.sh@958 -- # kill -0 82412 00:14:23.634 04:20:09 ublk -- common/autotest_common.sh@959 -- # uname 00:14:23.634 04:20:09 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.634 04:20:09 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82412 00:14:23.634 killing process with pid 82412 00:14:23.634 04:20:09 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.634 04:20:09 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.634 04:20:09 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82412' 00:14:23.634 04:20:09 ublk -- common/autotest_common.sh@973 -- # kill 82412 00:14:23.634 04:20:09 ublk -- common/autotest_common.sh@978 -- # wait 82412 00:14:23.892 [2024-11-17 04:20:09.380677] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:23.892 [2024-11-17 04:20:09.380734] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:24.153 00:14:24.153 real 0m19.078s 00:14:24.153 user 0m28.292s 00:14:24.153 sys 0m8.006s 00:14:24.153 ************************************ 00:14:24.153 END TEST ublk 00:14:24.153 ************************************ 00:14:24.153 04:20:09 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.153 04:20:09 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.153 04:20:09 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:24.153 04:20:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:14:24.153 04:20:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.153 04:20:09 -- common/autotest_common.sh@10 -- # set +x 00:14:24.153 ************************************ 00:14:24.153 START TEST ublk_recovery 00:14:24.153 ************************************ 00:14:24.153 04:20:09 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:24.153 * Looking for test storage... 00:14:24.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:24.153 04:20:09 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:24.153 04:20:09 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:24.153 04:20:09 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:24.153 04:20:09 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.153 04:20:09 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:24.154 04:20:09 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.154 04:20:09 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:24.154 04:20:09 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:24.154 04:20:09 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.154 04:20:09 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:24.154 04:20:09 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.154 04:20:09 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.154 04:20:09 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.154 04:20:09 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:24.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.154 --rc genhtml_branch_coverage=1 00:14:24.154 --rc genhtml_function_coverage=1 00:14:24.154 --rc genhtml_legend=1 00:14:24.154 --rc geninfo_all_blocks=1 00:14:24.154 --rc geninfo_unexecuted_blocks=1 00:14:24.154 00:14:24.154 ' 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:24.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.154 --rc genhtml_branch_coverage=1 00:14:24.154 --rc genhtml_function_coverage=1 00:14:24.154 --rc genhtml_legend=1 00:14:24.154 --rc geninfo_all_blocks=1 00:14:24.154 --rc geninfo_unexecuted_blocks=1 00:14:24.154 00:14:24.154 ' 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:24.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.154 --rc genhtml_branch_coverage=1 00:14:24.154 --rc genhtml_function_coverage=1 00:14:24.154 --rc genhtml_legend=1 00:14:24.154 --rc geninfo_all_blocks=1 00:14:24.154 --rc geninfo_unexecuted_blocks=1 00:14:24.154 00:14:24.154 ' 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:24.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.154 --rc genhtml_branch_coverage=1 00:14:24.154 --rc genhtml_function_coverage=1 00:14:24.154 --rc genhtml_legend=1 00:14:24.154 --rc geninfo_all_blocks=1 00:14:24.154 --rc geninfo_unexecuted_blocks=1 00:14:24.154 00:14:24.154 ' 00:14:24.154 04:20:09 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:24.154 04:20:09 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:24.154 04:20:09 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:24.154 04:20:09 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:24.154 04:20:09 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:24.154 04:20:09 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:24.154 04:20:09 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:24.154 04:20:09 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:24.154 04:20:09 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:24.154 04:20:09 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:24.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.154 04:20:09 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82781 00:14:24.154 04:20:09 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:24.154 04:20:09 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82781 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 82781 ']' 00:14:24.154 04:20:09 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.154 04:20:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.154 [2024-11-17 04:20:09.877014] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:14:24.154 [2024-11-17 04:20:09.877610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82781 ] 00:14:24.413 [2024-11-17 04:20:10.031544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:24.413 [2024-11-17 04:20:10.056990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.413 [2024-11-17 04:20:10.057038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:25.348 04:20:10 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.348 [2024-11-17 04:20:10.717394] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:25.348 [2024-11-17 04:20:10.718623] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.348 04:20:10 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.348 malloc0 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.348 04:20:10 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.348 [2024-11-17 04:20:10.757502] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:25.348 [2024-11-17 04:20:10.757605] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:25.348 [2024-11-17 04:20:10.757617] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:25.348 [2024-11-17 04:20:10.757626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:25.348 [2024-11-17 04:20:10.766493] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:25.348 [2024-11-17 04:20:10.766516] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:25.348 [2024-11-17 04:20:10.773405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:25.348 [2024-11-17 04:20:10.773525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:25.348 [2024-11-17 04:20:10.788398] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:25.348 1 00:14:25.348 04:20:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.348 04:20:10 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:26.282 04:20:11 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82808 00:14:26.282 04:20:11 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:26.282 04:20:11 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:26.282 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:26.282 fio-3.35 00:14:26.282 Starting 1 process 00:14:31.548 04:20:16 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82781 00:14:31.548 04:20:16 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:36.831 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82781 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:36.831 04:20:21 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=82920 00:14:36.831 04:20:21 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:36.831 04:20:21 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.831 04:20:21 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 82920 00:14:36.831 04:20:21 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 82920 ']' 00:14:36.831 04:20:21 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.831 04:20:21 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.831 04:20:21 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.831 04:20:21 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.831 04:20:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.831 [2024-11-17 04:20:21.887505] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
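At this point the recovery scenario is in its second half: the first target (pid 82781) was killed with SIGKILL while the 60-second fio job was still running, and a fresh target has just been started. Sketching the sequence as hand-run commands, with paths and parameters taken from the log (this is illustrative, not the test script itself):

kill -9 "$spdk_pid"                                                # first target, pid 82781 in this run
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &   # start a replacement target
spdk_pid=$!                                                        # the harness then waits on /var/tmp/spdk.sock (waitforlisten)
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC ublk_create_target
$RPC bdev_malloc_create -b malloc0 64 4096                         # recreate the backing bdev
$RPC ublk_recover_disk malloc0 1                                   # re-attach the existing /dev/ublkb1 instead of recreating it

The ublk_recover_disk call is what produces the UBLK_CMD_GET_DEV_INFO / START_USER_RECOVERY / END_USER_RECOVERY exchanges seen below, after which the still-running fio job completes against the recovered device.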
00:14:36.831 [2024-11-17 04:20:21.887834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82920 ] 00:14:36.831 [2024-11-17 04:20:22.045779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:36.831 [2024-11-17 04:20:22.067887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.831 [2024-11-17 04:20:22.067929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.092 04:20:22 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.092 04:20:22 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:37.092 04:20:22 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:37.092 04:20:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.092 04:20:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.092 [2024-11-17 04:20:22.701402] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:37.092 [2024-11-17 04:20:22.703068] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:37.092 04:20:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.092 04:20:22 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:37.092 04:20:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.092 04:20:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.092 malloc0 00:14:37.092 04:20:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.092 04:20:22 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:37.092 04:20:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.092 04:20:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.092 [2024-11-17 04:20:22.759557] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:37.092 [2024-11-17 04:20:22.759613] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:37.092 [2024-11-17 04:20:22.759622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:37.092 [2024-11-17 04:20:22.766454] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:37.093 [2024-11-17 04:20:22.766482] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:37.093 1 00:14:37.093 04:20:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.093 04:20:22 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82808 00:14:38.471 [2024-11-17 04:20:23.766527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:38.471 [2024-11-17 04:20:23.773402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:38.471 [2024-11-17 04:20:23.773427] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:39.406 [2024-11-17 04:20:24.778399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:39.406 [2024-11-17 04:20:24.784402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:39.406 [2024-11-17 04:20:24.784421] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:40.340 [2024-11-17 04:20:25.784446] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:40.340 [2024-11-17 04:20:25.794402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:40.340 [2024-11-17 04:20:25.794422] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:40.340 [2024-11-17 04:20:25.794430] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:40.340 [2024-11-17 04:20:25.794511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:02.361 [2024-11-17 04:20:47.153407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:02.361 [2024-11-17 04:20:47.160036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:02.361 [2024-11-17 04:20:47.167587] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:02.361 [2024-11-17 04:20:47.167613] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:28.899 00:15:28.899 fio_test: (groupid=0, jobs=1): err= 0: pid=82817: Sun Nov 17 04:21:12 2024 00:15:28.899 read: IOPS=14.0k, BW=54.6MiB/s (57.3MB/s)(3279MiB/60002msec) 00:15:28.899 slat (nsec): min=1308, max=261401, avg=5508.68, stdev=1547.26 00:15:28.899 clat (usec): min=1166, max=30373k, avg=4200.42, stdev=245822.77 00:15:28.899 lat (usec): min=1177, max=30373k, avg=4205.93, stdev=245822.77 00:15:28.899 clat percentiles (usec): 00:15:28.899 | 1.00th=[ 1860], 5.00th=[ 1975], 10.00th=[ 1991], 20.00th=[ 2024], 00:15:28.899 | 30.00th=[ 2040], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089], 00:15:28.899 | 70.00th=[ 2114], 80.00th=[ 2114], 90.00th=[ 2180], 95.00th=[ 3195], 00:15:28.899 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 7308], 99.95th=[ 8848], 00:15:28.899 | 99.99th=[13042] 00:15:28.900 bw ( KiB/s): min=38064, max=117864, per=100.00%, avg=112005.56, stdev=14337.91, samples=59 00:15:28.900 iops : min= 9516, max=29466, avg=28001.39, stdev=3584.48, samples=59 00:15:28.900 write: IOPS=14.0k, BW=54.6MiB/s (57.2MB/s)(3275MiB/60002msec); 0 zone resets 00:15:28.900 slat (nsec): min=1455, max=343163, avg=5744.62, stdev=1640.24 00:15:28.900 clat (usec): min=1183, max=30374k, avg=4942.70, stdev=283377.94 00:15:28.900 lat (usec): min=1189, max=30374k, avg=4948.45, stdev=283377.94 00:15:28.900 clat percentiles (usec): 00:15:28.900 | 1.00th=[ 1909], 5.00th=[ 2073], 10.00th=[ 2114], 20.00th=[ 2114], 00:15:28.900 | 30.00th=[ 2147], 40.00th=[ 2147], 50.00th=[ 2180], 60.00th=[ 2180], 00:15:28.900 | 70.00th=[ 2212], 80.00th=[ 2212], 90.00th=[ 2278], 95.00th=[ 3130], 00:15:28.900 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 7504], 99.95th=[ 8848], 00:15:28.900 | 99.99th=[13173] 00:15:28.900 bw ( KiB/s): min=37936, max=117616, per=100.00%, avg=111855.32, stdev=14264.57, samples=59 00:15:28.900 iops : min= 9484, max=29404, avg=27963.83, stdev=3566.14, samples=59 00:15:28.900 lat (msec) : 2=6.20%, 4=90.67%, 10=3.09%, 20=0.03%, >=2000=0.01% 00:15:28.900 cpu : usr=3.02%, sys=16.04%, ctx=54577, majf=0, minf=13 00:15:28.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:28.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:28.900 issued rwts: total=839427,838394,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:15:28.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:28.900 00:15:28.900 Run status group 0 (all jobs): 00:15:28.900 READ: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=3279MiB (3438MB), run=60002-60002msec 00:15:28.900 WRITE: bw=54.6MiB/s (57.2MB/s), 54.6MiB/s-54.6MiB/s (57.2MB/s-57.2MB/s), io=3275MiB (3434MB), run=60002-60002msec 00:15:28.900 00:15:28.900 Disk stats (read/write): 00:15:28.900 ublkb1: ios=836246/835228, merge=0/0, ticks=3474258/4019411, in_queue=7493670, util=99.89% 00:15:28.900 04:21:12 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.900 [2024-11-17 04:21:12.052071] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:28.900 [2024-11-17 04:21:12.090421] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:28.900 [2024-11-17 04:21:12.090574] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:28.900 [2024-11-17 04:21:12.098422] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:28.900 [2024-11-17 04:21:12.098517] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:28.900 [2024-11-17 04:21:12.098529] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.900 04:21:12 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.900 [2024-11-17 04:21:12.114480] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:28.900 [2024-11-17 04:21:12.115639] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:28.900 [2024-11-17 04:21:12.115669] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.900 04:21:12 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:28.900 04:21:12 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:28.900 04:21:12 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 82920 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 82920 ']' 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 82920 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82920 00:15:28.900 killing process with pid 82920 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82920' 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@973 -- # kill 82920 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@978 -- # wait 82920 00:15:28.900 [2024-11-17 
04:21:12.377811] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:28.900 [2024-11-17 04:21:12.377869] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:28.900 ************************************ 00:15:28.900 END TEST ublk_recovery 00:15:28.900 ************************************ 00:15:28.900 00:15:28.900 real 1m3.073s 00:15:28.900 user 1m43.850s 00:15:28.900 sys 0m23.142s 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.900 04:21:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.900 04:21:12 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:15:28.900 04:21:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:28.900 04:21:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:28.900 04:21:12 -- common/autotest_common.sh@10 -- # set +x 00:15:28.900 04:21:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:28.900 04:21:12 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:28.900 04:21:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:28.900 04:21:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.900 04:21:12 -- common/autotest_common.sh@10 -- # set +x 00:15:28.900 ************************************ 00:15:28.900 START TEST ftl 00:15:28.900 ************************************ 00:15:28.900 04:21:12 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:28.900 * Looking for test storage... 
00:15:28.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:28.900 04:21:12 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:28.900 04:21:12 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:28.900 04:21:12 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:15:28.900 04:21:12 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:28.900 04:21:12 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.900 04:21:12 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.900 04:21:12 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.900 04:21:12 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.900 04:21:12 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.900 04:21:12 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.900 04:21:12 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.900 04:21:12 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.900 04:21:12 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.900 04:21:12 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.900 04:21:12 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.900 04:21:12 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:28.900 04:21:12 ftl -- scripts/common.sh@345 -- # : 1 00:15:28.900 04:21:12 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.900 04:21:12 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:28.900 04:21:12 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:28.900 04:21:12 ftl -- scripts/common.sh@353 -- # local d=1 00:15:28.900 04:21:12 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.900 04:21:12 ftl -- scripts/common.sh@355 -- # echo 1 00:15:28.900 04:21:12 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.900 04:21:12 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:28.900 04:21:12 ftl -- scripts/common.sh@353 -- # local d=2 00:15:28.900 04:21:12 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.900 04:21:12 ftl -- scripts/common.sh@355 -- # echo 2 00:15:28.900 04:21:12 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.900 04:21:12 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.900 04:21:12 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.900 04:21:12 ftl -- scripts/common.sh@368 -- # return 0 00:15:28.900 04:21:12 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.900 04:21:12 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:28.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.900 --rc genhtml_branch_coverage=1 00:15:28.900 --rc genhtml_function_coverage=1 00:15:28.900 --rc genhtml_legend=1 00:15:28.900 --rc geninfo_all_blocks=1 00:15:28.900 --rc geninfo_unexecuted_blocks=1 00:15:28.900 00:15:28.900 ' 00:15:28.900 04:21:12 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:28.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.900 --rc genhtml_branch_coverage=1 00:15:28.900 --rc genhtml_function_coverage=1 00:15:28.900 --rc genhtml_legend=1 00:15:28.900 --rc geninfo_all_blocks=1 00:15:28.900 --rc geninfo_unexecuted_blocks=1 00:15:28.900 00:15:28.900 ' 00:15:28.900 04:21:12 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:28.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.900 --rc genhtml_branch_coverage=1 00:15:28.900 --rc genhtml_function_coverage=1 00:15:28.900 --rc 
genhtml_legend=1 00:15:28.900 --rc geninfo_all_blocks=1 00:15:28.900 --rc geninfo_unexecuted_blocks=1 00:15:28.900 00:15:28.900 ' 00:15:28.900 04:21:12 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:28.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.901 --rc genhtml_branch_coverage=1 00:15:28.901 --rc genhtml_function_coverage=1 00:15:28.901 --rc genhtml_legend=1 00:15:28.901 --rc geninfo_all_blocks=1 00:15:28.901 --rc geninfo_unexecuted_blocks=1 00:15:28.901 00:15:28.901 ' 00:15:28.901 04:21:12 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:28.901 04:21:12 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:28.901 04:21:12 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:28.901 04:21:12 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:28.901 04:21:12 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:28.901 04:21:12 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:28.901 04:21:12 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.901 04:21:12 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:28.901 04:21:12 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:28.901 04:21:12 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.901 04:21:12 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.901 04:21:12 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:28.901 04:21:12 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:28.901 04:21:12 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:28.901 04:21:12 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:28.901 04:21:12 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:28.901 04:21:12 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:28.901 04:21:12 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.901 04:21:12 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.901 04:21:12 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:28.901 04:21:12 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:28.901 04:21:12 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:28.901 04:21:12 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:28.901 04:21:12 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:28.901 04:21:12 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:28.901 04:21:12 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:28.901 04:21:12 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:28.901 04:21:12 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.901 04:21:12 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.901 04:21:12 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.901 04:21:12 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:28.901 04:21:12 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:28.901 04:21:12 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:28.901 04:21:12 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:28.901 04:21:12 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:28.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:28.901 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:28.901 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:28.901 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:28.901 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:28.901 04:21:13 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83719 00:15:28.901 04:21:13 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83719 00:15:28.901 04:21:13 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:28.901 04:21:13 ftl -- common/autotest_common.sh@835 -- # '[' -z 83719 ']' 00:15:28.901 04:21:13 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.901 04:21:13 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.901 04:21:13 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.901 04:21:13 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.901 04:21:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:28.901 [2024-11-17 04:21:13.524556] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:15:28.901 [2024-11-17 04:21:13.524693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83719 ] 00:15:28.901 [2024-11-17 04:21:13.682404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.901 [2024-11-17 04:21:13.705190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.901 04:21:14 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.901 04:21:14 ftl -- common/autotest_common.sh@868 -- # return 0 00:15:28.901 04:21:14 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:28.901 04:21:14 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:29.473 04:21:14 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:29.473 04:21:14 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:29.734 04:21:15 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:29.734 04:21:15 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:29.734 04:21:15 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:29.995 04:21:15 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:29.995 04:21:15 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:29.995 04:21:15 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:29.995 04:21:15 ftl -- ftl/ftl.sh@50 -- # break 00:15:29.995 04:21:15 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:29.995 04:21:15 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:29.995 04:21:15 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:29.995 04:21:15 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:30.256 04:21:15 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:30.256 04:21:15 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:30.256 04:21:15 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:30.256 04:21:15 ftl -- ftl/ftl.sh@63 -- # break 00:15:30.256 04:21:15 ftl -- ftl/ftl.sh@66 -- # killprocess 83719 00:15:30.256 04:21:15 ftl -- common/autotest_common.sh@954 -- # '[' -z 83719 ']' 00:15:30.256 04:21:15 ftl -- common/autotest_common.sh@958 -- # kill -0 83719 00:15:30.256 04:21:15 ftl -- common/autotest_common.sh@959 -- # uname 00:15:30.256 04:21:15 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.256 04:21:15 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83719 00:15:30.256 04:21:15 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.256 killing process with pid 83719 00:15:30.256 04:21:15 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.256 04:21:15 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83719' 00:15:30.256 04:21:15 ftl -- common/autotest_common.sh@973 -- # kill 83719 00:15:30.256 04:21:15 ftl -- common/autotest_common.sh@978 -- # wait 83719 00:15:30.517 04:21:16 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:30.517 04:21:16 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:30.517 04:21:16 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:30.517 04:21:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.517 04:21:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:30.517 ************************************ 00:15:30.517 START TEST ftl_fio_basic 00:15:30.518 ************************************ 00:15:30.518 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:30.518 * Looking for test storage... 
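Before the FTL target is assembled, ftl.sh picks its cache and base devices by querying the running spdk_tgt and filtering the bdev list with jq, as the ftl.sh@47 and ftl.sh@60 traces above show. The sketch below simply replays that selection by hand; it assumes spdk_tgt is already running with the NVMe controllers attached, and it hard-codes the cache address 0000:00:10.0 exactly as the script resolved it in this run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Cache candidates: non-zoned namespaces with 64-byte metadata and at least
# 1310720 blocks (the jq filter from ftl.sh@47).
cache_disks=$($rpc bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')

# Base candidates: other non-zoned namespaces of sufficient size, excluding the
# device already chosen as cache (the jq filter from ftl.sh@60).
base_disks=$($rpc bdev_get_bdevs | jq -r \
        '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')

echo "cache candidates: $cache_disks"   # -> 0000:00:10.0 in this run
echo "base candidates:  $base_disks"    # -> 0000:00:11.0 in this run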
00:15:30.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:30.518 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:30.518 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:15:30.518 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.779 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:30.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.780 --rc genhtml_branch_coverage=1 00:15:30.780 --rc genhtml_function_coverage=1 00:15:30.780 --rc genhtml_legend=1 00:15:30.780 --rc geninfo_all_blocks=1 00:15:30.780 --rc geninfo_unexecuted_blocks=1 00:15:30.780 00:15:30.780 ' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:30.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.780 --rc 
genhtml_branch_coverage=1 00:15:30.780 --rc genhtml_function_coverage=1 00:15:30.780 --rc genhtml_legend=1 00:15:30.780 --rc geninfo_all_blocks=1 00:15:30.780 --rc geninfo_unexecuted_blocks=1 00:15:30.780 00:15:30.780 ' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:30.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.780 --rc genhtml_branch_coverage=1 00:15:30.780 --rc genhtml_function_coverage=1 00:15:30.780 --rc genhtml_legend=1 00:15:30.780 --rc geninfo_all_blocks=1 00:15:30.780 --rc geninfo_unexecuted_blocks=1 00:15:30.780 00:15:30.780 ' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:30.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.780 --rc genhtml_branch_coverage=1 00:15:30.780 --rc genhtml_function_coverage=1 00:15:30.780 --rc genhtml_legend=1 00:15:30.780 --rc geninfo_all_blocks=1 00:15:30.780 --rc geninfo_unexecuted_blocks=1 00:15:30.780 00:15:30.780 ' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:30.780 
04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83840 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83840 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 83840 ']' 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
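ftl_fio_basic re-sources the same ftl/common.sh environment and then selects its fio workloads from an associative array keyed by suite name, which is why the `basic` argument expands to `randw-verify randw-verify-j2 randw-verify-depth128` in the trace above. The sketch below reproduces that selection; the mapping of each workload name to a job file under config/fio/ is an assumption for illustration, while the array contents and argument handling come straight from the trace.

# Suite selection as seen in the fio.sh trace above.
declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

device=$1           # base device,     0000:00:11.0 in this run
cache_device=$2     # NV cache device, 0000:00:10.0 in this run
tests=${suite[$3]}  # suite name,      "basic" in this run

if [ -z "$tests" ]; then
        echo "unknown suite: $3" >&2
        exit 1
fi

for t in $tests; do
        # Assumed layout: each workload name corresponds to a fio job file.
        echo "would run job config/fio/$t.fio against bdev $FTL_BDEV_NAME"
done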
00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.780 04:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:30.780 [2024-11-17 04:21:16.365841] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:15:30.780 [2024-11-17 04:21:16.365968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83840 ] 00:15:31.040 [2024-11-17 04:21:16.523406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:31.040 [2024-11-17 04:21:16.554984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.040 [2024-11-17 04:21:16.555178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.040 [2024-11-17 04:21:16.555237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.607 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.607 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:15:31.607 04:21:17 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:31.607 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:31.607 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:31.607 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:31.607 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:31.607 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:31.866 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:31.866 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:31.866 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:31.866 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:31.866 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:31.866 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:31.866 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:31.866 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:32.124 { 00:15:32.124 "name": "nvme0n1", 00:15:32.124 "aliases": [ 00:15:32.124 "5017284f-f42b-4f7f-b2f2-95d11e36489b" 00:15:32.124 ], 00:15:32.124 "product_name": "NVMe disk", 00:15:32.124 "block_size": 4096, 00:15:32.124 "num_blocks": 1310720, 00:15:32.124 "uuid": "5017284f-f42b-4f7f-b2f2-95d11e36489b", 00:15:32.124 "numa_id": -1, 00:15:32.124 "assigned_rate_limits": { 00:15:32.124 "rw_ios_per_sec": 0, 00:15:32.124 "rw_mbytes_per_sec": 0, 00:15:32.124 "r_mbytes_per_sec": 0, 00:15:32.124 "w_mbytes_per_sec": 0 00:15:32.124 }, 00:15:32.124 "claimed": false, 00:15:32.124 "zoned": false, 00:15:32.124 "supported_io_types": { 00:15:32.124 "read": true, 00:15:32.124 "write": true, 00:15:32.124 "unmap": true, 00:15:32.124 "flush": true, 00:15:32.124 "reset": true, 00:15:32.124 "nvme_admin": true, 00:15:32.124 "nvme_io": true, 00:15:32.124 "nvme_io_md": 
false, 00:15:32.124 "write_zeroes": true, 00:15:32.124 "zcopy": false, 00:15:32.124 "get_zone_info": false, 00:15:32.124 "zone_management": false, 00:15:32.124 "zone_append": false, 00:15:32.124 "compare": true, 00:15:32.124 "compare_and_write": false, 00:15:32.124 "abort": true, 00:15:32.124 "seek_hole": false, 00:15:32.124 "seek_data": false, 00:15:32.124 "copy": true, 00:15:32.124 "nvme_iov_md": false 00:15:32.124 }, 00:15:32.124 "driver_specific": { 00:15:32.124 "nvme": [ 00:15:32.124 { 00:15:32.124 "pci_address": "0000:00:11.0", 00:15:32.124 "trid": { 00:15:32.124 "trtype": "PCIe", 00:15:32.124 "traddr": "0000:00:11.0" 00:15:32.124 }, 00:15:32.124 "ctrlr_data": { 00:15:32.124 "cntlid": 0, 00:15:32.124 "vendor_id": "0x1b36", 00:15:32.124 "model_number": "QEMU NVMe Ctrl", 00:15:32.124 "serial_number": "12341", 00:15:32.124 "firmware_revision": "8.0.0", 00:15:32.124 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:32.124 "oacs": { 00:15:32.124 "security": 0, 00:15:32.124 "format": 1, 00:15:32.124 "firmware": 0, 00:15:32.124 "ns_manage": 1 00:15:32.124 }, 00:15:32.124 "multi_ctrlr": false, 00:15:32.124 "ana_reporting": false 00:15:32.124 }, 00:15:32.124 "vs": { 00:15:32.124 "nvme_version": "1.4" 00:15:32.124 }, 00:15:32.124 "ns_data": { 00:15:32.124 "id": 1, 00:15:32.124 "can_share": false 00:15:32.124 } 00:15:32.124 } 00:15:32.124 ], 00:15:32.124 "mp_policy": "active_passive" 00:15:32.124 } 00:15:32.124 } 00:15:32.124 ]' 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:32.124 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:32.383 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:32.383 04:21:17 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ef11f593-e71c-47d1-959b-31ea2465a452 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ef11f593-e71c-47d1-959b-31ea2465a452 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:32.641 04:21:18 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:32.641 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:32.900 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:32.900 { 00:15:32.900 "name": "a81301bf-4a33-4707-93cd-ef27865b96fb", 00:15:32.900 "aliases": [ 00:15:32.900 "lvs/nvme0n1p0" 00:15:32.900 ], 00:15:32.900 "product_name": "Logical Volume", 00:15:32.900 "block_size": 4096, 00:15:32.900 "num_blocks": 26476544, 00:15:32.900 "uuid": "a81301bf-4a33-4707-93cd-ef27865b96fb", 00:15:32.900 "assigned_rate_limits": { 00:15:32.900 "rw_ios_per_sec": 0, 00:15:32.900 "rw_mbytes_per_sec": 0, 00:15:32.900 "r_mbytes_per_sec": 0, 00:15:32.900 "w_mbytes_per_sec": 0 00:15:32.900 }, 00:15:32.900 "claimed": false, 00:15:32.900 "zoned": false, 00:15:32.900 "supported_io_types": { 00:15:32.900 "read": true, 00:15:32.900 "write": true, 00:15:32.900 "unmap": true, 00:15:32.900 "flush": false, 00:15:32.900 "reset": true, 00:15:32.900 "nvme_admin": false, 00:15:32.900 "nvme_io": false, 00:15:32.900 "nvme_io_md": false, 00:15:32.900 "write_zeroes": true, 00:15:32.900 "zcopy": false, 00:15:32.900 "get_zone_info": false, 00:15:32.900 "zone_management": false, 00:15:32.900 "zone_append": false, 00:15:32.900 "compare": false, 00:15:32.900 "compare_and_write": false, 00:15:32.900 "abort": false, 00:15:32.900 "seek_hole": true, 00:15:32.900 "seek_data": true, 00:15:32.900 "copy": false, 00:15:32.900 "nvme_iov_md": false 00:15:32.900 }, 00:15:32.900 "driver_specific": { 00:15:32.900 "lvol": { 00:15:32.900 "lvol_store_uuid": "ef11f593-e71c-47d1-959b-31ea2465a452", 00:15:32.900 "base_bdev": "nvme0n1", 00:15:32.900 "thin_provision": true, 00:15:32.900 "num_allocated_clusters": 0, 00:15:32.900 "snapshot": false, 00:15:32.901 "clone": false, 00:15:32.901 "esnap_clone": false 00:15:32.901 } 00:15:32.901 } 00:15:32.901 } 00:15:32.901 ]' 00:15:32.901 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:32.901 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:32.901 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:32.901 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:32.901 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:32.901 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:32.901 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:32.901 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:32.901 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:33.159 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:33.159 04:21:18 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:33.159 04:21:18 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:33.159 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:33.159 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:33.159 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:33.159 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:33.159 04:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:33.418 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:33.418 { 00:15:33.418 "name": "a81301bf-4a33-4707-93cd-ef27865b96fb", 00:15:33.418 "aliases": [ 00:15:33.418 "lvs/nvme0n1p0" 00:15:33.418 ], 00:15:33.418 "product_name": "Logical Volume", 00:15:33.418 "block_size": 4096, 00:15:33.418 "num_blocks": 26476544, 00:15:33.418 "uuid": "a81301bf-4a33-4707-93cd-ef27865b96fb", 00:15:33.418 "assigned_rate_limits": { 00:15:33.418 "rw_ios_per_sec": 0, 00:15:33.418 "rw_mbytes_per_sec": 0, 00:15:33.418 "r_mbytes_per_sec": 0, 00:15:33.418 "w_mbytes_per_sec": 0 00:15:33.418 }, 00:15:33.418 "claimed": false, 00:15:33.418 "zoned": false, 00:15:33.418 "supported_io_types": { 00:15:33.418 "read": true, 00:15:33.418 "write": true, 00:15:33.418 "unmap": true, 00:15:33.418 "flush": false, 00:15:33.418 "reset": true, 00:15:33.418 "nvme_admin": false, 00:15:33.418 "nvme_io": false, 00:15:33.418 "nvme_io_md": false, 00:15:33.418 "write_zeroes": true, 00:15:33.418 "zcopy": false, 00:15:33.418 "get_zone_info": false, 00:15:33.418 "zone_management": false, 00:15:33.418 "zone_append": false, 00:15:33.418 "compare": false, 00:15:33.418 "compare_and_write": false, 00:15:33.418 "abort": false, 00:15:33.418 "seek_hole": true, 00:15:33.418 "seek_data": true, 00:15:33.418 "copy": false, 00:15:33.418 "nvme_iov_md": false 00:15:33.418 }, 00:15:33.418 "driver_specific": { 00:15:33.418 "lvol": { 00:15:33.418 "lvol_store_uuid": "ef11f593-e71c-47d1-959b-31ea2465a452", 00:15:33.418 "base_bdev": "nvme0n1", 00:15:33.418 "thin_provision": true, 00:15:33.418 "num_allocated_clusters": 0, 00:15:33.418 "snapshot": false, 00:15:33.418 "clone": false, 00:15:33.418 "esnap_clone": false 00:15:33.418 } 00:15:33.418 } 00:15:33.418 } 00:15:33.418 ]' 00:15:33.418 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:33.418 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:33.418 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:33.418 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:33.418 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:33.418 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:33.418 04:21:19 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:33.418 04:21:19 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:33.677 04:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:33.677 04:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:33.677 04:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:33.677 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:33.677 04:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:33.677 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:33.677 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:33.677 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:33.677 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:33.677 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a81301bf-4a33-4707-93cd-ef27865b96fb 00:15:33.936 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:33.936 { 00:15:33.936 "name": "a81301bf-4a33-4707-93cd-ef27865b96fb", 00:15:33.936 "aliases": [ 00:15:33.936 "lvs/nvme0n1p0" 00:15:33.936 ], 00:15:33.936 "product_name": "Logical Volume", 00:15:33.936 "block_size": 4096, 00:15:33.936 "num_blocks": 26476544, 00:15:33.936 "uuid": "a81301bf-4a33-4707-93cd-ef27865b96fb", 00:15:33.936 "assigned_rate_limits": { 00:15:33.936 "rw_ios_per_sec": 0, 00:15:33.936 "rw_mbytes_per_sec": 0, 00:15:33.936 "r_mbytes_per_sec": 0, 00:15:33.936 "w_mbytes_per_sec": 0 00:15:33.936 }, 00:15:33.936 "claimed": false, 00:15:33.936 "zoned": false, 00:15:33.936 "supported_io_types": { 00:15:33.936 "read": true, 00:15:33.936 "write": true, 00:15:33.936 "unmap": true, 00:15:33.936 "flush": false, 00:15:33.936 "reset": true, 00:15:33.936 "nvme_admin": false, 00:15:33.936 "nvme_io": false, 00:15:33.936 "nvme_io_md": false, 00:15:33.936 "write_zeroes": true, 00:15:33.936 "zcopy": false, 00:15:33.936 "get_zone_info": false, 00:15:33.936 "zone_management": false, 00:15:33.936 "zone_append": false, 00:15:33.936 "compare": false, 00:15:33.936 "compare_and_write": false, 00:15:33.936 "abort": false, 00:15:33.936 "seek_hole": true, 00:15:33.936 "seek_data": true, 00:15:33.936 "copy": false, 00:15:33.936 "nvme_iov_md": false 00:15:33.936 }, 00:15:33.936 "driver_specific": { 00:15:33.936 "lvol": { 00:15:33.936 "lvol_store_uuid": "ef11f593-e71c-47d1-959b-31ea2465a452", 00:15:33.936 "base_bdev": "nvme0n1", 00:15:33.936 "thin_provision": true, 00:15:33.936 "num_allocated_clusters": 0, 00:15:33.936 "snapshot": false, 00:15:33.936 "clone": false, 00:15:33.936 "esnap_clone": false 00:15:33.936 } 00:15:33.936 } 00:15:33.936 } 00:15:33.936 ]' 00:15:33.936 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:33.936 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:33.936 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:33.936 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:33.936 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:33.936 04:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:33.936 04:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:33.936 04:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:33.936 04:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a81301bf-4a33-4707-93cd-ef27865b96fb -c nvc0n1p0 --l2p_dram_limit 60 00:15:34.195 [2024-11-17 04:21:19.768613] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.768655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:34.195 [2024-11-17 04:21:19.768666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:34.195 [2024-11-17 04:21:19.768674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.195 [2024-11-17 04:21:19.768722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.768731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:34.195 [2024-11-17 04:21:19.768747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:34.195 [2024-11-17 04:21:19.768756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.195 [2024-11-17 04:21:19.768788] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:34.195 [2024-11-17 04:21:19.769070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:34.195 [2024-11-17 04:21:19.769090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.769100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:34.195 [2024-11-17 04:21:19.769107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:15:34.195 [2024-11-17 04:21:19.769116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.195 [2024-11-17 04:21:19.769220] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5600b262-1d49-4180-a03f-2a45c77eeb57 00:15:34.195 [2024-11-17 04:21:19.770521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.770546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:34.195 [2024-11-17 04:21:19.770555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:34.195 [2024-11-17 04:21:19.770562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.195 [2024-11-17 04:21:19.777361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.777399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:34.195 [2024-11-17 04:21:19.777419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.730 ms 00:15:34.195 [2024-11-17 04:21:19.777436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.195 [2024-11-17 04:21:19.777523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.777532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:34.195 [2024-11-17 04:21:19.777541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:15:34.195 [2024-11-17 04:21:19.777547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.195 [2024-11-17 04:21:19.777613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.777629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:34.195 [2024-11-17 04:21:19.777637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:34.195 [2024-11-17 04:21:19.777644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:34.195 [2024-11-17 04:21:19.777682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:34.195 [2024-11-17 04:21:19.779286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.779318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:34.195 [2024-11-17 04:21:19.779326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:15:34.195 [2024-11-17 04:21:19.779334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.195 [2024-11-17 04:21:19.779383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.779393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:34.195 [2024-11-17 04:21:19.779400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:34.195 [2024-11-17 04:21:19.779410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.195 [2024-11-17 04:21:19.779444] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:34.195 [2024-11-17 04:21:19.779580] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:34.195 [2024-11-17 04:21:19.779596] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:34.195 [2024-11-17 04:21:19.779608] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:34.195 [2024-11-17 04:21:19.779616] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:34.195 [2024-11-17 04:21:19.779629] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:34.195 [2024-11-17 04:21:19.779636] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:34.195 [2024-11-17 04:21:19.779644] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:34.195 [2024-11-17 04:21:19.779651] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:34.195 [2024-11-17 04:21:19.779658] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:34.195 [2024-11-17 04:21:19.779665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.779672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:34.195 [2024-11-17 04:21:19.779687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:15:34.195 [2024-11-17 04:21:19.779695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.195 [2024-11-17 04:21:19.779781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.195 [2024-11-17 04:21:19.779791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:34.196 [2024-11-17 04:21:19.779799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:34.196 [2024-11-17 04:21:19.779806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.196 [2024-11-17 04:21:19.779912] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:34.196 [2024-11-17 04:21:19.779927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:34.196 
[2024-11-17 04:21:19.779934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:34.196 [2024-11-17 04:21:19.779942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.196 [2024-11-17 04:21:19.779951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:34.196 [2024-11-17 04:21:19.779960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:34.196 [2024-11-17 04:21:19.779966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:34.196 [2024-11-17 04:21:19.779974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:34.196 [2024-11-17 04:21:19.779981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:34.196 [2024-11-17 04:21:19.779989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:34.196 [2024-11-17 04:21:19.779997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:34.196 [2024-11-17 04:21:19.780009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:34.196 [2024-11-17 04:21:19.780015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:34.196 [2024-11-17 04:21:19.780024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:34.196 [2024-11-17 04:21:19.780031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:34.196 [2024-11-17 04:21:19.780038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.196 [2024-11-17 04:21:19.780055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:34.196 [2024-11-17 04:21:19.780063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:34.196 [2024-11-17 04:21:19.780070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.196 [2024-11-17 04:21:19.780077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:34.196 [2024-11-17 04:21:19.780084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:34.196 [2024-11-17 04:21:19.780091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:34.196 [2024-11-17 04:21:19.780097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:34.196 [2024-11-17 04:21:19.780105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:34.196 [2024-11-17 04:21:19.780111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:34.196 [2024-11-17 04:21:19.780119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:34.196 [2024-11-17 04:21:19.780125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:34.196 [2024-11-17 04:21:19.780133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:34.196 [2024-11-17 04:21:19.780140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:34.196 [2024-11-17 04:21:19.780150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:34.196 [2024-11-17 04:21:19.780156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:34.196 [2024-11-17 04:21:19.780163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:34.196 [2024-11-17 04:21:19.780176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:34.196 [2024-11-17 04:21:19.780184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:34.196 [2024-11-17 04:21:19.780190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:34.196 [2024-11-17 04:21:19.780197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:34.196 [2024-11-17 04:21:19.780203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:34.196 [2024-11-17 04:21:19.780210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:34.196 [2024-11-17 04:21:19.780216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:34.196 [2024-11-17 04:21:19.780224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.196 [2024-11-17 04:21:19.780230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:34.196 [2024-11-17 04:21:19.780238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:34.196 [2024-11-17 04:21:19.780244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.196 [2024-11-17 04:21:19.780252] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:34.196 [2024-11-17 04:21:19.780260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:34.196 [2024-11-17 04:21:19.780271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:34.196 [2024-11-17 04:21:19.780280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.196 [2024-11-17 04:21:19.780288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:34.196 [2024-11-17 04:21:19.780294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:34.196 [2024-11-17 04:21:19.780301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:34.196 [2024-11-17 04:21:19.780307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:34.196 [2024-11-17 04:21:19.780314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:34.196 [2024-11-17 04:21:19.780321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:34.196 [2024-11-17 04:21:19.780332] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:34.196 [2024-11-17 04:21:19.780349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:34.196 [2024-11-17 04:21:19.780357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:34.196 [2024-11-17 04:21:19.780363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:34.196 [2024-11-17 04:21:19.780371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:34.196 [2024-11-17 04:21:19.780387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:34.196 [2024-11-17 04:21:19.780395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:34.196 [2024-11-17 04:21:19.780400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:34.196 [2024-11-17 
04:21:19.780410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:34.196 [2024-11-17 04:21:19.780415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:34.196 [2024-11-17 04:21:19.780423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:34.196 [2024-11-17 04:21:19.780429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:34.196 [2024-11-17 04:21:19.780436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:34.196 [2024-11-17 04:21:19.780442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:34.196 [2024-11-17 04:21:19.780450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:34.196 [2024-11-17 04:21:19.780455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:34.196 [2024-11-17 04:21:19.780462] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:34.196 [2024-11-17 04:21:19.780469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:34.196 [2024-11-17 04:21:19.780477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:34.196 [2024-11-17 04:21:19.780483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:34.196 [2024-11-17 04:21:19.780491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:34.196 [2024-11-17 04:21:19.780497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:34.196 [2024-11-17 04:21:19.780506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.196 [2024-11-17 04:21:19.780512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:34.196 [2024-11-17 04:21:19.780523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:15:34.196 [2024-11-17 04:21:19.780529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.197 [2024-11-17 04:21:19.780587] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
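Two details in the setup above are worth calling out. The `[: -eq: unary operator expected` message from fio.sh line 52 is bash reporting that the operand to the left of `-eq` expanded to nothing (the traced command is `'[' -eq 1 ']'`); the test fails and the script continues at line 56, so it is noise in this run rather than a fatal error. The sizes also follow from the bdev JSON dumps: the base lvol reports 26476544 blocks of 4096 bytes, i.e. 103424 MiB, and the NV cache split is 5171 MiB carved from nvc0n1. Condensed, the bdev stack that the FTL initialization trace describes was assembled with the RPC sequence below, copied from the trace; addresses and sizes are from this run, and the lvol name captured into `$lvol` would differ on a fresh run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: 0000:00:11.0 -> nvme0n1 (1310720 x 4096 B blocks = 5120 MiB).
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

# Lvol store on the base device, then a thin-provisioned 103424 MiB volume
# (26476544 x 4096 B blocks) that will hold FTL user data.
lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)            # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs") # prints the lvol bdev name

# NV cache device: 0000:00:10.0 -> nvc0n1, with a 5171 MiB split for the cache.
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1                     # creates nvc0n1p0

# Tie them together as ftl0; -t 240 raises the RPC timeout because the initial
# NV cache scrub (started just above, completed below) can take a while.
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60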
00:15:34.197 [2024-11-17 04:21:19.780597] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:36.099 [2024-11-17 04:21:21.645929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.645981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:36.099 [2024-11-17 04:21:21.645996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1865.330 ms 00:15:36.099 [2024-11-17 04:21:21.646003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.099 [2024-11-17 04:21:21.656078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.656115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:36.099 [2024-11-17 04:21:21.656130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.989 ms 00:15:36.099 [2024-11-17 04:21:21.656137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.099 [2024-11-17 04:21:21.656265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.656281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:36.099 [2024-11-17 04:21:21.656291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:15:36.099 [2024-11-17 04:21:21.656305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.099 [2024-11-17 04:21:21.674178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.674218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:36.099 [2024-11-17 04:21:21.674232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.827 ms 00:15:36.099 [2024-11-17 04:21:21.674239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.099 [2024-11-17 04:21:21.674285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.674293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:36.099 [2024-11-17 04:21:21.674314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:36.099 [2024-11-17 04:21:21.674321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.099 [2024-11-17 04:21:21.674925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.674953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:36.099 [2024-11-17 04:21:21.674963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:15:36.099 [2024-11-17 04:21:21.674972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.099 [2024-11-17 04:21:21.675083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.675100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:36.099 [2024-11-17 04:21:21.675109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:15:36.099 [2024-11-17 04:21:21.675117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.099 [2024-11-17 04:21:21.681642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.681676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:36.099 [2024-11-17 
04:21:21.681690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.486 ms 00:15:36.099 [2024-11-17 04:21:21.681699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.099 [2024-11-17 04:21:21.691771] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:36.099 [2024-11-17 04:21:21.706956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.706988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:36.099 [2024-11-17 04:21:21.706997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.178 ms 00:15:36.099 [2024-11-17 04:21:21.707007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.099 [2024-11-17 04:21:21.739411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.739445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:36.099 [2024-11-17 04:21:21.739454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.370 ms 00:15:36.099 [2024-11-17 04:21:21.739464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.099 [2024-11-17 04:21:21.739613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.099 [2024-11-17 04:21:21.739626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:36.099 [2024-11-17 04:21:21.739634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:15:36.099 [2024-11-17 04:21:21.739642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.100 [2024-11-17 04:21:21.742231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.100 [2024-11-17 04:21:21.742264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:36.100 [2024-11-17 04:21:21.742272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.539 ms 00:15:36.100 [2024-11-17 04:21:21.742279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.100 [2024-11-17 04:21:21.744219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.100 [2024-11-17 04:21:21.744249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:36.100 [2024-11-17 04:21:21.744256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.906 ms 00:15:36.100 [2024-11-17 04:21:21.744264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.100 [2024-11-17 04:21:21.744539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.100 [2024-11-17 04:21:21.744555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:36.100 [2024-11-17 04:21:21.744563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:15:36.100 [2024-11-17 04:21:21.744572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.100 [2024-11-17 04:21:21.765579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.100 [2024-11-17 04:21:21.765612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:36.100 [2024-11-17 04:21:21.765620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.984 ms 00:15:36.100 [2024-11-17 04:21:21.765637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.100 [2024-11-17 04:21:21.769124] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.100 [2024-11-17 04:21:21.769157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:36.100 [2024-11-17 04:21:21.769165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.438 ms 00:15:36.100 [2024-11-17 04:21:21.769174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.100 [2024-11-17 04:21:21.771456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.100 [2024-11-17 04:21:21.771485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:36.100 [2024-11-17 04:21:21.771492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:15:36.100 [2024-11-17 04:21:21.771500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.100 [2024-11-17 04:21:21.773938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.100 [2024-11-17 04:21:21.773969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:36.100 [2024-11-17 04:21:21.773976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.404 ms 00:15:36.100 [2024-11-17 04:21:21.773986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.100 [2024-11-17 04:21:21.774024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.100 [2024-11-17 04:21:21.774034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:36.100 [2024-11-17 04:21:21.774041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:36.100 [2024-11-17 04:21:21.774049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.100 [2024-11-17 04:21:21.774109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.100 [2024-11-17 04:21:21.774117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:36.100 [2024-11-17 04:21:21.774126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:36.100 [2024-11-17 04:21:21.774134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.100 [2024-11-17 04:21:21.775015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2006.042 ms, result 0 00:15:36.100 { 00:15:36.100 "name": "ftl0", 00:15:36.100 "uuid": "5600b262-1d49-4180-a03f-2a45c77eeb57" 00:15:36.100 } 00:15:36.100 04:21:21 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:36.100 04:21:21 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:15:36.100 04:21:21 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.100 04:21:21 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:15:36.100 04:21:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.100 04:21:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.100 04:21:21 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:36.358 04:21:21 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:36.616 [ 00:15:36.616 { 00:15:36.616 "name": "ftl0", 00:15:36.616 "aliases": [ 00:15:36.616 "5600b262-1d49-4180-a03f-2a45c77eeb57" 00:15:36.616 ], 00:15:36.616 "product_name": "FTL disk", 00:15:36.616 
"block_size": 4096, 00:15:36.616 "num_blocks": 20971520, 00:15:36.616 "uuid": "5600b262-1d49-4180-a03f-2a45c77eeb57", 00:15:36.616 "assigned_rate_limits": { 00:15:36.616 "rw_ios_per_sec": 0, 00:15:36.616 "rw_mbytes_per_sec": 0, 00:15:36.616 "r_mbytes_per_sec": 0, 00:15:36.616 "w_mbytes_per_sec": 0 00:15:36.616 }, 00:15:36.616 "claimed": false, 00:15:36.616 "zoned": false, 00:15:36.616 "supported_io_types": { 00:15:36.616 "read": true, 00:15:36.616 "write": true, 00:15:36.616 "unmap": true, 00:15:36.616 "flush": true, 00:15:36.616 "reset": false, 00:15:36.616 "nvme_admin": false, 00:15:36.616 "nvme_io": false, 00:15:36.616 "nvme_io_md": false, 00:15:36.616 "write_zeroes": true, 00:15:36.616 "zcopy": false, 00:15:36.616 "get_zone_info": false, 00:15:36.616 "zone_management": false, 00:15:36.616 "zone_append": false, 00:15:36.616 "compare": false, 00:15:36.616 "compare_and_write": false, 00:15:36.616 "abort": false, 00:15:36.616 "seek_hole": false, 00:15:36.616 "seek_data": false, 00:15:36.616 "copy": false, 00:15:36.616 "nvme_iov_md": false 00:15:36.616 }, 00:15:36.616 "driver_specific": { 00:15:36.616 "ftl": { 00:15:36.616 "base_bdev": "a81301bf-4a33-4707-93cd-ef27865b96fb", 00:15:36.616 "cache": "nvc0n1p0" 00:15:36.616 } 00:15:36.616 } 00:15:36.616 } 00:15:36.616 ] 00:15:36.616 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:15:36.616 04:21:22 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:36.616 04:21:22 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:36.616 04:21:22 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:36.616 04:21:22 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:36.876 [2024-11-17 04:21:22.398745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.876 [2024-11-17 04:21:22.398778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:36.876 [2024-11-17 04:21:22.398789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:36.876 [2024-11-17 04:21:22.398796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.876 [2024-11-17 04:21:22.398828] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:36.876 [2024-11-17 04:21:22.399351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.876 [2024-11-17 04:21:22.399401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:36.876 [2024-11-17 04:21:22.399412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:15:36.876 [2024-11-17 04:21:22.399420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.876 [2024-11-17 04:21:22.399801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.876 [2024-11-17 04:21:22.399832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:36.876 [2024-11-17 04:21:22.399840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:15:36.876 [2024-11-17 04:21:22.399848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.876 [2024-11-17 04:21:22.402260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.876 [2024-11-17 04:21:22.402281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:36.876 [2024-11-17 
04:21:22.402288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 00:15:36.876 [2024-11-17 04:21:22.402297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.876 [2024-11-17 04:21:22.406871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.876 [2024-11-17 04:21:22.406896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:36.876 [2024-11-17 04:21:22.406904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.551 ms 00:15:36.876 [2024-11-17 04:21:22.406912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.876 [2024-11-17 04:21:22.408457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.876 [2024-11-17 04:21:22.408492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:36.876 [2024-11-17 04:21:22.408499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:15:36.876 [2024-11-17 04:21:22.408506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.876 [2024-11-17 04:21:22.412290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.876 [2024-11-17 04:21:22.412320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:36.876 [2024-11-17 04:21:22.412334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.751 ms 00:15:36.876 [2024-11-17 04:21:22.412342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.876 [2024-11-17 04:21:22.412494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.876 [2024-11-17 04:21:22.412505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:36.876 [2024-11-17 04:21:22.412512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:15:36.876 [2024-11-17 04:21:22.412519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.876 [2024-11-17 04:21:22.413791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.876 [2024-11-17 04:21:22.413821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:36.876 [2024-11-17 04:21:22.413827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:15:36.876 [2024-11-17 04:21:22.413834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.877 [2024-11-17 04:21:22.414881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.877 [2024-11-17 04:21:22.414912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:36.877 [2024-11-17 04:21:22.414921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:15:36.877 [2024-11-17 04:21:22.414929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.877 [2024-11-17 04:21:22.415666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.877 [2024-11-17 04:21:22.415695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:36.877 [2024-11-17 04:21:22.415702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:15:36.877 [2024-11-17 04:21:22.415709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.877 [2024-11-17 04:21:22.416563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.877 [2024-11-17 04:21:22.416592] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:36.877 [2024-11-17 04:21:22.416599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:15:36.877 [2024-11-17 04:21:22.416606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.877 [2024-11-17 04:21:22.416637] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:36.877 [2024-11-17 04:21:22.416652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 
04:21:22.416815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:36.877 [2024-11-17 04:21:22.416985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.416998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:36.877 [2024-11-17 04:21:22.417196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:36.878 [2024-11-17 04:21:22.417362] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:36.878 [2024-11-17 04:21:22.417368] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5600b262-1d49-4180-a03f-2a45c77eeb57 00:15:36.878 [2024-11-17 04:21:22.417390] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:36.878 [2024-11-17 04:21:22.417405] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:36.878 [2024-11-17 04:21:22.417413] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:36.878 [2024-11-17 04:21:22.417427] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:36.878 [2024-11-17 04:21:22.417434] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:36.878 [2024-11-17 04:21:22.417448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:36.878 [2024-11-17 04:21:22.417456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:36.878 [2024-11-17 04:21:22.417462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:36.878 [2024-11-17 04:21:22.417468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:36.878 [2024-11-17 04:21:22.417474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.878 [2024-11-17 04:21:22.417481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:36.878 [2024-11-17 04:21:22.417488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:15:36.878 [2024-11-17 04:21:22.417495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.418876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.878 [2024-11-17 04:21:22.418899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:36.878 [2024-11-17 04:21:22.418907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:15:36.878 [2024-11-17 04:21:22.418915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.418988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.878 [2024-11-17 04:21:22.418997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:36.878 [2024-11-17 04:21:22.419004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:36.878 [2024-11-17 04:21:22.419022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.424964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.424994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:36.878 [2024-11-17 04:21:22.425002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.425011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 
[2024-11-17 04:21:22.425062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.425070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:36.878 [2024-11-17 04:21:22.425077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.425087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.425143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.425158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:36.878 [2024-11-17 04:21:22.425166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.425174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.425194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.425211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:36.878 [2024-11-17 04:21:22.425217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.425225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.436005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.436046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:36.878 [2024-11-17 04:21:22.436054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.436062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.445004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.445037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:36.878 [2024-11-17 04:21:22.445047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.445055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.445127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.445139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:36.878 [2024-11-17 04:21:22.445156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.445163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.445224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.445234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:36.878 [2024-11-17 04:21:22.445241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.445249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.445327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.445343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:36.878 [2024-11-17 04:21:22.445351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.445359] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.445475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.445499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:36.878 [2024-11-17 04:21:22.445506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.445513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.445554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.445567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:36.878 [2024-11-17 04:21:22.445574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.445582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.445627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.878 [2024-11-17 04:21:22.445637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:36.878 [2024-11-17 04:21:22.445644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.878 [2024-11-17 04:21:22.445663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.878 [2024-11-17 04:21:22.445810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.038 ms, result 0 00:15:36.878 true 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83840 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 83840 ']' 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 83840 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83840 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.878 killing process with pid 83840 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83840' 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 83840 00:15:36.878 04:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 83840 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:42.145 04:21:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:42.145 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:42.145 fio-3.35 00:15:42.145 Starting 1 thread 00:15:47.438 00:15:47.438 test: (groupid=0, jobs=1): err= 0: pid=83991: Sun Nov 17 04:21:33 2024 00:15:47.438 read: IOPS=789, BW=52.4MiB/s (55.0MB/s)(255MiB/4855msec) 00:15:47.438 slat (usec): min=2, max=125, avg= 5.38, stdev= 3.52 00:15:47.438 clat (usec): min=277, max=6213, avg=571.81, stdev=181.41 00:15:47.438 lat (usec): min=281, max=6226, avg=577.19, stdev=181.81 00:15:47.438 clat percentiles (usec): 00:15:47.438 | 1.00th=[ 367], 5.00th=[ 420], 10.00th=[ 449], 20.00th=[ 474], 00:15:47.438 | 30.00th=[ 502], 40.00th=[ 515], 50.00th=[ 523], 60.00th=[ 537], 00:15:47.438 | 70.00th=[ 570], 80.00th=[ 603], 90.00th=[ 840], 95.00th=[ 914], 00:15:47.438 | 99.00th=[ 1123], 99.50th=[ 1172], 99.90th=[ 1418], 99.95th=[ 1860], 00:15:47.438 | 99.99th=[ 6194] 00:15:47.438 write: IOPS=795, BW=52.8MiB/s (55.4MB/s)(256MiB/4850msec); 0 zone resets 00:15:47.438 slat (nsec): min=13492, max=99142, avg=24967.62, stdev=7913.06 00:15:47.438 clat (usec): min=313, max=2125, avg=646.32, stdev=162.20 00:15:47.438 lat (usec): min=335, max=2142, avg=671.29, stdev=163.01 00:15:47.438 clat percentiles (usec): 00:15:47.438 | 1.00th=[ 408], 5.00th=[ 474], 10.00th=[ 529], 20.00th=[ 545], 00:15:47.438 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 603], 60.00th=[ 619], 00:15:47.438 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 914], 95.00th=[ 979], 00:15:47.438 | 99.00th=[ 1221], 99.50th=[ 1303], 99.90th=[ 1795], 99.95th=[ 2073], 00:15:47.438 | 99.99th=[ 2114] 00:15:47.438 bw ( KiB/s): min=50320, max=59024, per=98.92%, avg=53478.22, stdev=3018.75, samples=9 00:15:47.438 iops : min= 740, max= 868, avg=786.44, stdev=44.39, samples=9 00:15:47.438 lat (usec) : 500=18.31%, 750=67.20%, 1000=11.05% 
00:15:47.438 lat (msec) : 2=3.39%, 4=0.03%, 10=0.01% 00:15:47.438 cpu : usr=99.09%, sys=0.08%, ctx=10, majf=0, minf=1181 00:15:47.438 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.438 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.438 00:15:47.438 Run status group 0 (all jobs): 00:15:47.438 READ: bw=52.4MiB/s (55.0MB/s), 52.4MiB/s-52.4MiB/s (55.0MB/s-55.0MB/s), io=255MiB (267MB), run=4855-4855msec 00:15:47.438 WRITE: bw=52.8MiB/s (55.4MB/s), 52.8MiB/s-52.8MiB/s (55.4MB/s-55.4MB/s), io=256MiB (269MB), run=4850-4850msec 00:15:48.379 ----------------------------------------------------- 00:15:48.379 Suppressions used: 00:15:48.379 count bytes template 00:15:48.379 1 5 /usr/src/fio/parse.c 00:15:48.379 1 8 libtcmalloc_minimal.so 00:15:48.379 1 904 libcrypto.so 00:15:48.379 ----------------------------------------------------- 00:15:48.379 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:48.379 04:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:48.379 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:48.379 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:48.379 fio-3.35 00:15:48.379 Starting 2 threads 00:16:10.404 00:16:10.404 first_half: (groupid=0, jobs=1): err= 0: pid=84094: Sun Nov 17 04:21:55 2024 00:16:10.404 read: IOPS=3122, BW=12.2MiB/s (12.8MB/s)(255MiB/20913msec) 00:16:10.404 slat (nsec): min=2958, max=21394, avg=3835.48, stdev=886.38 00:16:10.404 clat (usec): min=541, max=283269, avg=33568.37, stdev=15864.08 00:16:10.404 lat (usec): min=545, max=283273, avg=33572.21, stdev=15864.20 00:16:10.404 clat percentiles (msec): 00:16:10.404 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:16:10.404 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:16:10.404 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 49], 00:16:10.404 | 99.00th=[ 123], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 232], 00:16:10.404 | 99.99th=[ 275] 00:16:10.404 write: IOPS=3809, BW=14.9MiB/s (15.6MB/s)(256MiB/17202msec); 0 zone resets 00:16:10.404 slat (usec): min=3, max=213, avg= 5.52, stdev= 2.58 00:16:10.404 clat (usec): min=343, max=61971, avg=7373.86, stdev=11512.60 00:16:10.404 lat (usec): min=357, max=61976, avg=7379.37, stdev=11512.65 00:16:10.404 clat percentiles (usec): 00:16:10.404 | 1.00th=[ 611], 5.00th=[ 742], 10.00th=[ 848], 20.00th=[ 1123], 00:16:10.404 | 30.00th=[ 2606], 40.00th=[ 3490], 50.00th=[ 4228], 60.00th=[ 4817], 00:16:10.404 | 70.00th=[ 5604], 80.00th=[ 8848], 90.00th=[11338], 95.00th=[32375], 00:16:10.404 | 99.00th=[56361], 99.50th=[57934], 99.90th=[60031], 99.95th=[60556], 00:16:10.404 | 99.99th=[61604] 00:16:10.404 bw ( KiB/s): min= 744, max=43528, per=100.00%, avg=29122.89, stdev=14390.94, samples=18 00:16:10.404 iops : min= 186, max=10882, avg=7280.72, stdev=3597.74, samples=18 00:16:10.404 lat (usec) : 500=0.06%, 750=2.67%, 1000=5.72% 00:16:10.404 lat (msec) : 2=4.14%, 4=11.25%, 10=19.09%, 20=4.38%, 50=48.17% 00:16:10.404 lat (msec) : 100=3.71%, 250=0.78%, 500=0.02% 00:16:10.404 cpu : usr=99.29%, sys=0.13%, ctx=47, majf=0, minf=5577 00:16:10.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:10.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.404 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.404 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.404 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.404 second_half: (groupid=0, jobs=1): err= 0: pid=84095: Sun Nov 17 04:21:55 2024 00:16:10.404 read: IOPS=3099, BW=12.1MiB/s (12.7MB/s)(255MiB/21071msec) 00:16:10.404 slat (nsec): min=3070, max=21545, avg=5436.60, stdev=893.02 00:16:10.404 clat (usec): min=626, max=288543, avg=33002.62, stdev=17680.10 00:16:10.404 lat (usec): min=632, max=288549, avg=33008.06, stdev=17680.18 00:16:10.404 clat percentiles (msec): 00:16:10.404 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 27], 20.00th=[ 30], 00:16:10.404 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:16:10.404 | 70.00th=[ 31], 80.00th=[ 32], 90.00th=[ 36], 
95.00th=[ 44], 00:16:10.404 | 99.00th=[ 133], 99.50th=[ 150], 99.90th=[ 182], 99.95th=[ 201], 00:16:10.404 | 99.99th=[ 284] 00:16:10.404 write: IOPS=3442, BW=13.4MiB/s (14.1MB/s)(256MiB/19039msec); 0 zone resets 00:16:10.404 slat (usec): min=3, max=201, avg= 6.58, stdev= 2.71 00:16:10.404 clat (usec): min=360, max=61992, avg=8242.34, stdev=12621.56 00:16:10.404 lat (usec): min=367, max=61997, avg=8248.92, stdev=12621.64 00:16:10.404 clat percentiles (usec): 00:16:10.404 | 1.00th=[ 594], 5.00th=[ 750], 10.00th=[ 848], 20.00th=[ 1012], 00:16:10.404 | 30.00th=[ 1319], 40.00th=[ 2638], 50.00th=[ 3589], 60.00th=[ 4883], 00:16:10.404 | 70.00th=[ 6194], 80.00th=[10159], 90.00th=[26608], 95.00th=[39584], 00:16:10.404 | 99.00th=[55837], 99.50th=[57934], 99.90th=[60031], 99.95th=[61080], 00:16:10.404 | 99.99th=[61604] 00:16:10.404 bw ( KiB/s): min= 408, max=60616, per=86.54%, avg=23832.41, stdev=20086.97, samples=22 00:16:10.404 iops : min= 102, max=15154, avg=5958.05, stdev=5021.71, samples=22 00:16:10.404 lat (usec) : 500=0.06%, 750=2.46%, 1000=7.20% 00:16:10.404 lat (msec) : 2=7.88%, 4=9.49%, 10=14.48%, 20=4.51%, 50=49.52% 00:16:10.404 lat (msec) : 100=3.34%, 250=1.07%, 500=0.01% 00:16:10.404 cpu : usr=99.38%, sys=0.10%, ctx=35, majf=0, minf=5559 00:16:10.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:10.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.404 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.404 issued rwts: total=65312,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.404 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.404 00:16:10.404 Run status group 0 (all jobs): 00:16:10.404 READ: bw=24.2MiB/s (25.4MB/s), 12.1MiB/s-12.2MiB/s (12.7MB/s-12.8MB/s), io=510MiB (535MB), run=20913-21071msec 00:16:10.404 WRITE: bw=26.9MiB/s (28.2MB/s), 13.4MiB/s-14.9MiB/s (14.1MB/s-15.6MB/s), io=512MiB (537MB), run=17202-19039msec 00:16:11.347 ----------------------------------------------------- 00:16:11.347 Suppressions used: 00:16:11.347 count bytes template 00:16:11.347 2 10 /usr/src/fio/parse.c 00:16:11.347 4 384 /usr/src/fio/iolog.c 00:16:11.347 1 8 libtcmalloc_minimal.so 00:16:11.347 1 904 libcrypto.so 00:16:11.347 ----------------------------------------------------- 00:16:11.347 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:11.347 04:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:11.608 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:11.608 fio-3.35 00:16:11.608 Starting 1 thread 00:16:26.503 00:16:26.503 test: (groupid=0, jobs=1): err= 0: pid=84363: Sun Nov 17 04:22:10 2024 00:16:26.503 read: IOPS=8357, BW=32.6MiB/s (34.2MB/s)(255MiB/7802msec) 00:16:26.503 slat (nsec): min=2991, max=20756, avg=4348.41, stdev=1115.16 00:16:26.503 clat (usec): min=499, max=32030, avg=15307.45, stdev=1656.23 00:16:26.503 lat (usec): min=504, max=32034, avg=15311.80, stdev=1656.18 00:16:26.503 clat percentiles (usec): 00:16:26.503 | 1.00th=[13435], 5.00th=[13566], 10.00th=[13829], 20.00th=[14222], 00:16:26.503 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15270], 60.00th=[15401], 00:16:26.503 | 70.00th=[15533], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:16:26.503 | 99.00th=[22938], 99.50th=[24249], 99.90th=[29754], 99.95th=[31589], 00:16:26.503 | 99.99th=[31851] 00:16:26.503 write: IOPS=13.5k, BW=52.7MiB/s (55.3MB/s)(256MiB/4856msec); 0 zone resets 00:16:26.503 slat (usec): min=4, max=782, avg= 7.04, stdev= 4.97 00:16:26.503 clat (usec): min=473, max=45000, avg=9440.02, stdev=10703.88 00:16:26.503 lat (usec): min=478, max=45006, avg=9447.06, stdev=10703.89 00:16:26.503 clat percentiles (usec): 00:16:26.503 | 1.00th=[ 668], 5.00th=[ 799], 10.00th=[ 889], 20.00th=[ 1106], 00:16:26.503 | 30.00th=[ 1483], 40.00th=[ 2212], 50.00th=[ 6128], 60.00th=[ 7635], 00:16:26.503 | 70.00th=[ 9896], 80.00th=[15795], 90.00th=[27132], 95.00th=[35914], 00:16:26.503 | 99.00th=[39584], 99.50th=[40633], 99.90th=[42206], 99.95th=[43254], 00:16:26.503 | 99.99th=[44827] 00:16:26.503 bw ( KiB/s): min=30984, max=70435, per=97.09%, avg=52414.70, stdev=11274.89, samples=10 00:16:26.503 iops : min= 7746, max=17608, avg=13103.60, stdev=2818.59, samples=10 00:16:26.503 lat (usec) : 500=0.01%, 750=1.53%, 1000=6.49% 00:16:26.503 lat (msec) : 2=11.18%, 4=1.93%, 10=14.25%, 20=55.19%, 50=9.43% 00:16:26.503 cpu : usr=99.08%, sys=0.17%, ctx=32, majf=0, minf=5577 
00:16:26.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:26.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.503 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.503 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.503 00:16:26.503 Run status group 0 (all jobs): 00:16:26.503 READ: bw=32.6MiB/s (34.2MB/s), 32.6MiB/s-32.6MiB/s (34.2MB/s-34.2MB/s), io=255MiB (267MB), run=7802-7802msec 00:16:26.503 WRITE: bw=52.7MiB/s (55.3MB/s), 52.7MiB/s-52.7MiB/s (55.3MB/s-55.3MB/s), io=256MiB (268MB), run=4856-4856msec 00:16:26.503 ----------------------------------------------------- 00:16:26.503 Suppressions used: 00:16:26.503 count bytes template 00:16:26.503 1 5 /usr/src/fio/parse.c 00:16:26.503 2 192 /usr/src/fio/iolog.c 00:16:26.503 1 8 libtcmalloc_minimal.so 00:16:26.503 1 904 libcrypto.so 00:16:26.503 ----------------------------------------------------- 00:16:26.503 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:26.503 Remove shared memory files 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69382 /dev/shm/spdk_tgt_trace.pid82781 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:26.503 00:16:26.503 real 0m55.394s 00:16:26.503 user 2m1.754s 00:16:26.503 sys 0m2.591s 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.503 ************************************ 00:16:26.503 END TEST ftl_fio_basic 00:16:26.503 ************************************ 00:16:26.503 04:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:26.503 04:22:11 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:26.503 04:22:11 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:26.503 04:22:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.503 04:22:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:26.503 ************************************ 00:16:26.503 START TEST ftl_bdevperf 00:16:26.503 ************************************ 00:16:26.503 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:26.503 * Looking for test storage... 
00:16:26.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:26.503 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:26.503 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:26.503 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:26.503 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:26.503 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:26.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.504 --rc genhtml_branch_coverage=1 00:16:26.504 --rc genhtml_function_coverage=1 00:16:26.504 --rc genhtml_legend=1 00:16:26.504 --rc geninfo_all_blocks=1 00:16:26.504 --rc geninfo_unexecuted_blocks=1 00:16:26.504 00:16:26.504 ' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:26.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.504 --rc genhtml_branch_coverage=1 00:16:26.504 
--rc genhtml_function_coverage=1 00:16:26.504 --rc genhtml_legend=1 00:16:26.504 --rc geninfo_all_blocks=1 00:16:26.504 --rc geninfo_unexecuted_blocks=1 00:16:26.504 00:16:26.504 ' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:26.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.504 --rc genhtml_branch_coverage=1 00:16:26.504 --rc genhtml_function_coverage=1 00:16:26.504 --rc genhtml_legend=1 00:16:26.504 --rc geninfo_all_blocks=1 00:16:26.504 --rc geninfo_unexecuted_blocks=1 00:16:26.504 00:16:26.504 ' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:26.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.504 --rc genhtml_branch_coverage=1 00:16:26.504 --rc genhtml_function_coverage=1 00:16:26.504 --rc genhtml_legend=1 00:16:26.504 --rc geninfo_all_blocks=1 00:16:26.504 --rc geninfo_unexecuted_blocks=1 00:16:26.504 00:16:26.504 ' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84590 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84590 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 84590 ']' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:26.504 04:22:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:26.504 [2024-11-17 04:22:11.808535] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:16:26.504 [2024-11-17 04:22:11.808675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84590 ] 00:16:26.504 [2024-11-17 04:22:11.970997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.504 [2024-11-17 04:22:12.003071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.073 04:22:12 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.073 04:22:12 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:16:27.073 04:22:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:27.073 04:22:12 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:27.073 04:22:12 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:27.073 04:22:12 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:27.073 04:22:12 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:27.073 04:22:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:27.334 04:22:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:27.334 04:22:12 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:27.334 04:22:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:27.334 04:22:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:27.334 04:22:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:27.334 04:22:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:27.334 04:22:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:27.334 04:22:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:27.596 { 00:16:27.596 "name": "nvme0n1", 00:16:27.596 "aliases": [ 00:16:27.596 "255ab0ad-ff2f-461e-b6e2-9e154a7fb153" 00:16:27.596 ], 00:16:27.596 "product_name": "NVMe disk", 00:16:27.596 "block_size": 4096, 00:16:27.596 "num_blocks": 1310720, 00:16:27.596 "uuid": "255ab0ad-ff2f-461e-b6e2-9e154a7fb153", 00:16:27.596 "numa_id": -1, 00:16:27.596 "assigned_rate_limits": { 00:16:27.596 "rw_ios_per_sec": 0, 00:16:27.596 "rw_mbytes_per_sec": 0, 00:16:27.596 "r_mbytes_per_sec": 0, 00:16:27.596 "w_mbytes_per_sec": 0 00:16:27.596 }, 00:16:27.596 "claimed": true, 00:16:27.596 "claim_type": "read_many_write_one", 00:16:27.596 "zoned": false, 00:16:27.596 "supported_io_types": { 00:16:27.596 "read": true, 00:16:27.596 "write": true, 00:16:27.596 "unmap": true, 00:16:27.596 "flush": true, 00:16:27.596 "reset": true, 00:16:27.596 "nvme_admin": true, 00:16:27.596 "nvme_io": true, 00:16:27.596 "nvme_io_md": false, 00:16:27.596 "write_zeroes": true, 00:16:27.596 "zcopy": false, 00:16:27.596 "get_zone_info": false, 00:16:27.596 "zone_management": false, 00:16:27.596 "zone_append": false, 00:16:27.596 "compare": true, 00:16:27.596 "compare_and_write": false, 00:16:27.596 "abort": true, 00:16:27.596 "seek_hole": false, 00:16:27.596 "seek_data": false, 00:16:27.596 "copy": true, 00:16:27.596 "nvme_iov_md": false 00:16:27.596 }, 00:16:27.596 "driver_specific": { 00:16:27.596 
"nvme": [ 00:16:27.596 { 00:16:27.596 "pci_address": "0000:00:11.0", 00:16:27.596 "trid": { 00:16:27.596 "trtype": "PCIe", 00:16:27.596 "traddr": "0000:00:11.0" 00:16:27.596 }, 00:16:27.596 "ctrlr_data": { 00:16:27.596 "cntlid": 0, 00:16:27.596 "vendor_id": "0x1b36", 00:16:27.596 "model_number": "QEMU NVMe Ctrl", 00:16:27.596 "serial_number": "12341", 00:16:27.596 "firmware_revision": "8.0.0", 00:16:27.596 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:27.596 "oacs": { 00:16:27.596 "security": 0, 00:16:27.596 "format": 1, 00:16:27.596 "firmware": 0, 00:16:27.596 "ns_manage": 1 00:16:27.596 }, 00:16:27.596 "multi_ctrlr": false, 00:16:27.596 "ana_reporting": false 00:16:27.596 }, 00:16:27.596 "vs": { 00:16:27.596 "nvme_version": "1.4" 00:16:27.596 }, 00:16:27.596 "ns_data": { 00:16:27.596 "id": 1, 00:16:27.596 "can_share": false 00:16:27.596 } 00:16:27.596 } 00:16:27.596 ], 00:16:27.596 "mp_policy": "active_passive" 00:16:27.596 } 00:16:27.596 } 00:16:27.596 ]' 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:27.596 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:27.857 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ef11f593-e71c-47d1-959b-31ea2465a452 00:16:27.857 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:27.857 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef11f593-e71c-47d1-959b-31ea2465a452 00:16:27.857 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:28.118 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=1fa9ce58-cfdb-4c68-9492-9304e23351e6 00:16:28.118 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1fa9ce58-cfdb-4c68-9492-9304e23351e6 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:28.379 04:22:13 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:28.379 04:22:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:28.639 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:28.639 { 00:16:28.640 "name": "7805a2f4-c449-4de5-88f0-34c272422f8a", 00:16:28.640 "aliases": [ 00:16:28.640 "lvs/nvme0n1p0" 00:16:28.640 ], 00:16:28.640 "product_name": "Logical Volume", 00:16:28.640 "block_size": 4096, 00:16:28.640 "num_blocks": 26476544, 00:16:28.640 "uuid": "7805a2f4-c449-4de5-88f0-34c272422f8a", 00:16:28.640 "assigned_rate_limits": { 00:16:28.640 "rw_ios_per_sec": 0, 00:16:28.640 "rw_mbytes_per_sec": 0, 00:16:28.640 "r_mbytes_per_sec": 0, 00:16:28.640 "w_mbytes_per_sec": 0 00:16:28.640 }, 00:16:28.640 "claimed": false, 00:16:28.640 "zoned": false, 00:16:28.640 "supported_io_types": { 00:16:28.640 "read": true, 00:16:28.640 "write": true, 00:16:28.640 "unmap": true, 00:16:28.640 "flush": false, 00:16:28.640 "reset": true, 00:16:28.640 "nvme_admin": false, 00:16:28.640 "nvme_io": false, 00:16:28.640 "nvme_io_md": false, 00:16:28.640 "write_zeroes": true, 00:16:28.640 "zcopy": false, 00:16:28.640 "get_zone_info": false, 00:16:28.640 "zone_management": false, 00:16:28.640 "zone_append": false, 00:16:28.640 "compare": false, 00:16:28.640 "compare_and_write": false, 00:16:28.640 "abort": false, 00:16:28.640 "seek_hole": true, 00:16:28.640 "seek_data": true, 00:16:28.640 "copy": false, 00:16:28.640 "nvme_iov_md": false 00:16:28.640 }, 00:16:28.640 "driver_specific": { 00:16:28.640 "lvol": { 00:16:28.640 "lvol_store_uuid": "1fa9ce58-cfdb-4c68-9492-9304e23351e6", 00:16:28.640 "base_bdev": "nvme0n1", 00:16:28.640 "thin_provision": true, 00:16:28.640 "num_allocated_clusters": 0, 00:16:28.640 "snapshot": false, 00:16:28.640 "clone": false, 00:16:28.640 "esnap_clone": false 00:16:28.640 } 00:16:28.640 } 00:16:28.640 } 00:16:28.640 ]' 00:16:28.640 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:28.640 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:28.640 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:28.640 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:28.640 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:28.640 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:28.640 04:22:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:28.640 04:22:14 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:28.640 04:22:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:28.899 04:22:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:28.899 04:22:14 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:28.899 04:22:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:28.899 04:22:14 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:28.899 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:28.899 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:28.899 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:28.899 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:29.158 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:29.158 { 00:16:29.158 "name": "7805a2f4-c449-4de5-88f0-34c272422f8a", 00:16:29.158 "aliases": [ 00:16:29.158 "lvs/nvme0n1p0" 00:16:29.158 ], 00:16:29.158 "product_name": "Logical Volume", 00:16:29.158 "block_size": 4096, 00:16:29.158 "num_blocks": 26476544, 00:16:29.158 "uuid": "7805a2f4-c449-4de5-88f0-34c272422f8a", 00:16:29.158 "assigned_rate_limits": { 00:16:29.158 "rw_ios_per_sec": 0, 00:16:29.158 "rw_mbytes_per_sec": 0, 00:16:29.158 "r_mbytes_per_sec": 0, 00:16:29.158 "w_mbytes_per_sec": 0 00:16:29.158 }, 00:16:29.158 "claimed": false, 00:16:29.158 "zoned": false, 00:16:29.158 "supported_io_types": { 00:16:29.158 "read": true, 00:16:29.158 "write": true, 00:16:29.158 "unmap": true, 00:16:29.158 "flush": false, 00:16:29.158 "reset": true, 00:16:29.158 "nvme_admin": false, 00:16:29.158 "nvme_io": false, 00:16:29.158 "nvme_io_md": false, 00:16:29.158 "write_zeroes": true, 00:16:29.158 "zcopy": false, 00:16:29.158 "get_zone_info": false, 00:16:29.158 "zone_management": false, 00:16:29.158 "zone_append": false, 00:16:29.158 "compare": false, 00:16:29.158 "compare_and_write": false, 00:16:29.158 "abort": false, 00:16:29.158 "seek_hole": true, 00:16:29.158 "seek_data": true, 00:16:29.158 "copy": false, 00:16:29.158 "nvme_iov_md": false 00:16:29.158 }, 00:16:29.158 "driver_specific": { 00:16:29.158 "lvol": { 00:16:29.158 "lvol_store_uuid": "1fa9ce58-cfdb-4c68-9492-9304e23351e6", 00:16:29.158 "base_bdev": "nvme0n1", 00:16:29.158 "thin_provision": true, 00:16:29.158 "num_allocated_clusters": 0, 00:16:29.158 "snapshot": false, 00:16:29.158 "clone": false, 00:16:29.158 "esnap_clone": false 00:16:29.158 } 00:16:29.158 } 00:16:29.158 } 00:16:29.158 ]' 00:16:29.158 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:29.158 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:29.158 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:29.158 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:29.158 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:29.158 04:22:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:29.158 04:22:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:29.158 04:22:14 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:29.416 04:22:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:29.416 04:22:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:29.416 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:29.416 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:29.416 04:22:15 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:16:29.416 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:29.416 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7805a2f4-c449-4de5-88f0-34c272422f8a 00:16:29.675 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:29.675 { 00:16:29.675 "name": "7805a2f4-c449-4de5-88f0-34c272422f8a", 00:16:29.675 "aliases": [ 00:16:29.675 "lvs/nvme0n1p0" 00:16:29.675 ], 00:16:29.675 "product_name": "Logical Volume", 00:16:29.675 "block_size": 4096, 00:16:29.675 "num_blocks": 26476544, 00:16:29.675 "uuid": "7805a2f4-c449-4de5-88f0-34c272422f8a", 00:16:29.675 "assigned_rate_limits": { 00:16:29.675 "rw_ios_per_sec": 0, 00:16:29.675 "rw_mbytes_per_sec": 0, 00:16:29.675 "r_mbytes_per_sec": 0, 00:16:29.675 "w_mbytes_per_sec": 0 00:16:29.675 }, 00:16:29.675 "claimed": false, 00:16:29.675 "zoned": false, 00:16:29.675 "supported_io_types": { 00:16:29.675 "read": true, 00:16:29.675 "write": true, 00:16:29.675 "unmap": true, 00:16:29.675 "flush": false, 00:16:29.675 "reset": true, 00:16:29.675 "nvme_admin": false, 00:16:29.675 "nvme_io": false, 00:16:29.675 "nvme_io_md": false, 00:16:29.675 "write_zeroes": true, 00:16:29.675 "zcopy": false, 00:16:29.675 "get_zone_info": false, 00:16:29.675 "zone_management": false, 00:16:29.675 "zone_append": false, 00:16:29.675 "compare": false, 00:16:29.675 "compare_and_write": false, 00:16:29.675 "abort": false, 00:16:29.675 "seek_hole": true, 00:16:29.675 "seek_data": true, 00:16:29.675 "copy": false, 00:16:29.675 "nvme_iov_md": false 00:16:29.675 }, 00:16:29.675 "driver_specific": { 00:16:29.675 "lvol": { 00:16:29.675 "lvol_store_uuid": "1fa9ce58-cfdb-4c68-9492-9304e23351e6", 00:16:29.675 "base_bdev": "nvme0n1", 00:16:29.675 "thin_provision": true, 00:16:29.675 "num_allocated_clusters": 0, 00:16:29.675 "snapshot": false, 00:16:29.675 "clone": false, 00:16:29.675 "esnap_clone": false 00:16:29.675 } 00:16:29.675 } 00:16:29.675 } 00:16:29.675 ]' 00:16:29.675 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:29.675 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:29.675 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:29.675 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:29.675 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:29.675 04:22:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:29.675 04:22:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:29.675 04:22:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7805a2f4-c449-4de5-88f0-34c272422f8a -c nvc0n1p0 --l2p_dram_limit 20 00:16:29.934 [2024-11-17 04:22:15.456300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.934 [2024-11-17 04:22:15.456340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:29.934 [2024-11-17 04:22:15.456354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:29.935 [2024-11-17 04:22:15.456361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.456413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.935 [2024-11-17 04:22:15.456422] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:29.935 [2024-11-17 04:22:15.456432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:29.935 [2024-11-17 04:22:15.456438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.456454] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:29.935 [2024-11-17 04:22:15.456663] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:29.935 [2024-11-17 04:22:15.456681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.935 [2024-11-17 04:22:15.456687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:29.935 [2024-11-17 04:22:15.456697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:16:29.935 [2024-11-17 04:22:15.456703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.456725] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b869e45c-5422-4865-92f9-26befc46bb7c 00:16:29.935 [2024-11-17 04:22:15.457696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.935 [2024-11-17 04:22:15.457722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:29.935 [2024-11-17 04:22:15.457730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:29.935 [2024-11-17 04:22:15.457738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.462708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.935 [2024-11-17 04:22:15.462749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:29.935 [2024-11-17 04:22:15.462759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.687 ms 00:16:29.935 [2024-11-17 04:22:15.462769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.462843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.935 [2024-11-17 04:22:15.462852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:29.935 [2024-11-17 04:22:15.462860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:29.935 [2024-11-17 04:22:15.462868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.462904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.935 [2024-11-17 04:22:15.462913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:29.935 [2024-11-17 04:22:15.462920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:29.935 [2024-11-17 04:22:15.462933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.462950] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:29.935 [2024-11-17 04:22:15.464194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.935 [2024-11-17 04:22:15.464229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:29.935 [2024-11-17 04:22:15.464239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:16:29.935 [2024-11-17 04:22:15.464246] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.464271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.935 [2024-11-17 04:22:15.464277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:29.935 [2024-11-17 04:22:15.464286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:29.935 [2024-11-17 04:22:15.464292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.464304] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:29.935 [2024-11-17 04:22:15.464432] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:29.935 [2024-11-17 04:22:15.464443] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:29.935 [2024-11-17 04:22:15.464452] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:29.935 [2024-11-17 04:22:15.464461] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:29.935 [2024-11-17 04:22:15.464468] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:29.935 [2024-11-17 04:22:15.464476] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:29.935 [2024-11-17 04:22:15.464482] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:29.935 [2024-11-17 04:22:15.464489] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:29.935 [2024-11-17 04:22:15.464495] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:29.935 [2024-11-17 04:22:15.464504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.935 [2024-11-17 04:22:15.464510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:29.935 [2024-11-17 04:22:15.464518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:16:29.935 [2024-11-17 04:22:15.464524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.464587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.935 [2024-11-17 04:22:15.464599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:29.935 [2024-11-17 04:22:15.464609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:29.935 [2024-11-17 04:22:15.464615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.935 [2024-11-17 04:22:15.464685] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:29.935 [2024-11-17 04:22:15.464694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:29.935 [2024-11-17 04:22:15.464702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.935 [2024-11-17 04:22:15.464712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:29.935 [2024-11-17 04:22:15.464724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:29.935 
[2024-11-17 04:22:15.464735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:29.935 [2024-11-17 04:22:15.464743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.935 [2024-11-17 04:22:15.464754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:29.935 [2024-11-17 04:22:15.464760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:29.935 [2024-11-17 04:22:15.464767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.935 [2024-11-17 04:22:15.464772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:29.935 [2024-11-17 04:22:15.464779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:29.935 [2024-11-17 04:22:15.464784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:29.935 [2024-11-17 04:22:15.464797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:29.935 [2024-11-17 04:22:15.464803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:29.935 [2024-11-17 04:22:15.464815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.935 [2024-11-17 04:22:15.464827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:29.935 [2024-11-17 04:22:15.464832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.935 [2024-11-17 04:22:15.464844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:29.935 [2024-11-17 04:22:15.464851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.935 [2024-11-17 04:22:15.464866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:29.935 [2024-11-17 04:22:15.464872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.935 [2024-11-17 04:22:15.464884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:29.935 [2024-11-17 04:22:15.464892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.935 [2024-11-17 04:22:15.464906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:29.935 [2024-11-17 04:22:15.464912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:29.935 [2024-11-17 04:22:15.464919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.935 [2024-11-17 04:22:15.464925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:29.935 [2024-11-17 04:22:15.464932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:29.935 [2024-11-17 04:22:15.464938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:29.935 [2024-11-17 04:22:15.464951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:29.935 [2024-11-17 04:22:15.464958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464964] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:29.935 [2024-11-17 04:22:15.464973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:29.935 [2024-11-17 04:22:15.464982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.935 [2024-11-17 04:22:15.464989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.935 [2024-11-17 04:22:15.464995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:29.936 [2024-11-17 04:22:15.465003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:29.936 [2024-11-17 04:22:15.465009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:29.936 [2024-11-17 04:22:15.465017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:29.936 [2024-11-17 04:22:15.465022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:29.936 [2024-11-17 04:22:15.465030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:29.936 [2024-11-17 04:22:15.465039] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:29.936 [2024-11-17 04:22:15.465048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.936 [2024-11-17 04:22:15.465055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:29.936 [2024-11-17 04:22:15.465063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:29.936 [2024-11-17 04:22:15.465069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:29.936 [2024-11-17 04:22:15.465077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:29.936 [2024-11-17 04:22:15.465083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:29.936 [2024-11-17 04:22:15.465093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:29.936 [2024-11-17 04:22:15.465099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:29.936 [2024-11-17 04:22:15.465106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:29.936 [2024-11-17 04:22:15.465112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:29.936 [2024-11-17 04:22:15.465120] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:29.936 [2024-11-17 04:22:15.465126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:29.936 [2024-11-17 04:22:15.465134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:29.936 [2024-11-17 04:22:15.465140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:29.936 [2024-11-17 04:22:15.465148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:29.936 [2024-11-17 04:22:15.465154] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:29.936 [2024-11-17 04:22:15.465162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.936 [2024-11-17 04:22:15.465171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:29.936 [2024-11-17 04:22:15.465178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:29.936 [2024-11-17 04:22:15.465185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:29.936 [2024-11-17 04:22:15.465193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:29.936 [2024-11-17 04:22:15.465199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.936 [2024-11-17 04:22:15.465209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:29.936 [2024-11-17 04:22:15.465216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:16:29.936 [2024-11-17 04:22:15.465223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.936 [2024-11-17 04:22:15.465253] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:29.936 [2024-11-17 04:22:15.465268] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:33.232 [2024-11-17 04:22:18.899678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.232 [2024-11-17 04:22:18.899762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:33.232 [2024-11-17 04:22:18.899780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3434.406 ms 00:16:33.232 [2024-11-17 04:22:18.899795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.232 [2024-11-17 04:22:18.913640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.232 [2024-11-17 04:22:18.913701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.232 [2024-11-17 04:22:18.913717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.731 ms 00:16:33.232 [2024-11-17 04:22:18.913739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.232 [2024-11-17 04:22:18.913863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.232 [2024-11-17 04:22:18.913878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:33.232 [2024-11-17 04:22:18.913888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:33.232 [2024-11-17 04:22:18.913901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.232 [2024-11-17 04:22:18.937045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.232 [2024-11-17 04:22:18.937115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.232 [2024-11-17 04:22:18.937130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.104 ms 00:16:33.232 [2024-11-17 04:22:18.937142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.232 [2024-11-17 04:22:18.937186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.232 [2024-11-17 04:22:18.937200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:33.232 [2024-11-17 04:22:18.937213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:33.232 [2024-11-17 04:22:18.937225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.232 [2024-11-17 04:22:18.937840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.232 [2024-11-17 04:22:18.937895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:33.232 [2024-11-17 04:22:18.937916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:16:33.232 [2024-11-17 04:22:18.937934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.232 [2024-11-17 04:22:18.938073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.232 [2024-11-17 04:22:18.938087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:33.232 [2024-11-17 04:22:18.938099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:16:33.232 [2024-11-17 04:22:18.938113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.232 [2024-11-17 04:22:18.946334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.232 [2024-11-17 04:22:18.946413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:33.232 [2024-11-17 
04:22:18.946425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.200 ms 00:16:33.232 [2024-11-17 04:22:18.946436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.232 [2024-11-17 04:22:18.957007] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:33.494 [2024-11-17 04:22:18.964918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:18.964959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:33.494 [2024-11-17 04:22:18.964974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.408 ms 00:16:33.494 [2024-11-17 04:22:18.964982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.049399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.049464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:33.494 [2024-11-17 04:22:19.049484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.358 ms 00:16:33.494 [2024-11-17 04:22:19.049493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.049681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.049692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:33.494 [2024-11-17 04:22:19.049703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:16:33.494 [2024-11-17 04:22:19.049711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.055354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.055421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:33.494 [2024-11-17 04:22:19.055436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.602 ms 00:16:33.494 [2024-11-17 04:22:19.055445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.060193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.060273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:33.494 [2024-11-17 04:22:19.060288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:16:33.494 [2024-11-17 04:22:19.060296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.060667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.060679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:33.494 [2024-11-17 04:22:19.060693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:16:33.494 [2024-11-17 04:22:19.060701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.102158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.102271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:33.494 [2024-11-17 04:22:19.102305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.419 ms 00:16:33.494 [2024-11-17 04:22:19.102344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.111966] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.112055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:33.494 [2024-11-17 04:22:19.112084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.451 ms 00:16:33.494 [2024-11-17 04:22:19.112102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.119111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.119170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:33.494 [2024-11-17 04:22:19.119183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.928 ms 00:16:33.494 [2024-11-17 04:22:19.119190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.126067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.126121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:33.494 [2024-11-17 04:22:19.126138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.820 ms 00:16:33.494 [2024-11-17 04:22:19.126146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.126205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.126214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:33.494 [2024-11-17 04:22:19.126231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:33.494 [2024-11-17 04:22:19.126239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.126336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.494 [2024-11-17 04:22:19.126347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:33.494 [2024-11-17 04:22:19.126364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:33.494 [2024-11-17 04:22:19.126391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.494 [2024-11-17 04:22:19.127549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3670.702 ms, result 0 00:16:33.494 { 00:16:33.494 "name": "ftl0", 00:16:33.494 "uuid": "b869e45c-5422-4865-92f9-26befc46bb7c" 00:16:33.494 } 00:16:33.494 04:22:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:33.494 04:22:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:33.494 04:22:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:33.755 04:22:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:33.755 [2024-11-17 04:22:19.471110] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:33.755 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:33.755 Zero copy mechanism will not be used. 00:16:33.755 Running I/O for 4 seconds... 
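Condensing the rpc.py calls traced above, the ftl0 target exercised by the bdevperf passes below was assembled roughly as follows (a sketch of the sequence visible in the log; the generated lvstore/lvol UUIDs are shown as placeholders):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe namespace
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>         # thin-provisioned base volume
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB write-buffer partition
  $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20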
00:16:36.085 1081.00 IOPS, 71.79 MiB/s [2024-11-17T04:22:22.756Z] 997.50 IOPS, 66.24 MiB/s [2024-11-17T04:22:23.709Z] 988.00 IOPS, 65.61 MiB/s [2024-11-17T04:22:23.709Z] 1058.25 IOPS, 70.27 MiB/s 00:16:37.982 Latency(us) 00:16:37.982 [2024-11-17T04:22:23.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.982 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:37.982 ftl0 : 4.00 1058.14 70.27 0.00 0.00 995.77 171.72 4864.79 00:16:37.982 [2024-11-17T04:22:23.709Z] =================================================================================================================== 00:16:37.982 [2024-11-17T04:22:23.709Z] Total : 1058.14 70.27 0.00 0.00 995.77 171.72 4864.79 00:16:37.982 [2024-11-17 04:22:23.479206] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:37.982 { 00:16:37.982 "results": [ 00:16:37.982 { 00:16:37.982 "job": "ftl0", 00:16:37.982 "core_mask": "0x1", 00:16:37.982 "workload": "randwrite", 00:16:37.982 "status": "finished", 00:16:37.982 "queue_depth": 1, 00:16:37.982 "io_size": 69632, 00:16:37.982 "runtime": 4.001368, 00:16:37.982 "iops": 1058.1381167640666, 00:16:37.982 "mibps": 70.26698431636379, 00:16:37.982 "io_failed": 0, 00:16:37.982 "io_timeout": 0, 00:16:37.982 "avg_latency_us": 995.7706071727043, 00:16:37.982 "min_latency_us": 171.71692307692308, 00:16:37.982 "max_latency_us": 4864.787692307692 00:16:37.982 } 00:16:37.982 ], 00:16:37.982 "core_count": 1 00:16:37.982 } 00:16:37.982 04:22:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:37.982 [2024-11-17 04:22:23.587697] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:37.982 Running I/O for 4 seconds... 
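As a quick cross-check on the table above: throughput is IOPS times IO size, so 1058.14 IOPS × 69632 B ÷ 1048576 B/MiB ≈ 70.27 MiB/s, matching the reported figure; the same relation ties the 4096 B runs that follow to their MiB/s columns.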
00:16:40.307 8384.00 IOPS, 32.75 MiB/s [2024-11-17T04:22:26.608Z] 6975.50 IOPS, 27.25 MiB/s [2024-11-17T04:22:27.995Z] 6551.33 IOPS, 25.59 MiB/s [2024-11-17T04:22:27.995Z] 6223.75 IOPS, 24.31 MiB/s 00:16:42.268 Latency(us) 00:16:42.268 [2024-11-17T04:22:27.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.268 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.268 ftl0 : 4.04 6195.06 24.20 0.00 0.00 20563.80 206.38 50009.01 00:16:42.268 [2024-11-17T04:22:27.995Z] =================================================================================================================== 00:16:42.268 [2024-11-17T04:22:27.995Z] Total : 6195.06 24.20 0.00 0.00 20563.80 0.00 50009.01 00:16:42.268 [2024-11-17 04:22:27.631937] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:42.268 { 00:16:42.268 "results": [ 00:16:42.268 { 00:16:42.268 "job": "ftl0", 00:16:42.268 "core_mask": "0x1", 00:16:42.268 "workload": "randwrite", 00:16:42.268 "status": "finished", 00:16:42.268 "queue_depth": 128, 00:16:42.268 "io_size": 4096, 00:16:42.268 "runtime": 4.038219, 00:16:42.268 "iops": 6195.057771755321, 00:16:42.268 "mibps": 24.19944442091922, 00:16:42.268 "io_failed": 0, 00:16:42.268 "io_timeout": 0, 00:16:42.268 "avg_latency_us": 20563.795763742193, 00:16:42.268 "min_latency_us": 206.3753846153846, 00:16:42.268 "max_latency_us": 50009.00923076923 00:16:42.268 } 00:16:42.268 ], 00:16:42.268 "core_count": 1 00:16:42.268 } 00:16:42.268 04:22:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:42.268 [2024-11-17 04:22:27.736778] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:42.268 Running I/O for 4 seconds... 
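All three bdevperf passes follow the same harness pattern visible in the trace: the bdevperf app is started idle so it waits to be driven over RPC, ftl0 is built inside it via rpc.py as shown earlier, and each workload is then handed to it with bdevperf.py. A sketch condensed from the invocations in the log:

  # Start bdevperf idle (-z) against the FTL target, then kick off each workload over RPC.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bp=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  $bp perform_tests -q 1   -w randwrite -t 4 -o 69632   # 68 KiB writes, queue depth 1
  $bp perform_tests -q 128 -w randwrite -t 4 -o 4096    # 4 KiB writes, queue depth 128
  $bp perform_tests -q 128 -w verify    -t 4 -o 4096    # 4 KiB verify pass, queue depth 128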
00:16:44.153 6086.00 IOPS, 23.77 MiB/s [2024-11-17T04:22:30.825Z] 5969.00 IOPS, 23.32 MiB/s [2024-11-17T04:22:31.769Z] 5904.67 IOPS, 23.07 MiB/s [2024-11-17T04:22:31.769Z] 5787.00 IOPS, 22.61 MiB/s 00:16:46.042 Latency(us) 00:16:46.042 [2024-11-17T04:22:31.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.042 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:46.042 Verification LBA range: start 0x0 length 0x1400000 00:16:46.042 ftl0 : 4.01 5797.50 22.65 0.00 0.00 22016.53 274.12 96791.63 00:16:46.042 [2024-11-17T04:22:31.769Z] =================================================================================================================== 00:16:46.042 [2024-11-17T04:22:31.769Z] Total : 5797.50 22.65 0.00 0.00 22016.53 0.00 96791.63 00:16:46.043 [2024-11-17 04:22:31.758961] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:46.043 { 00:16:46.043 "results": [ 00:16:46.043 { 00:16:46.043 "job": "ftl0", 00:16:46.043 "core_mask": "0x1", 00:16:46.043 "workload": "verify", 00:16:46.043 "status": "finished", 00:16:46.043 "verify_range": { 00:16:46.043 "start": 0, 00:16:46.043 "length": 20971520 00:16:46.043 }, 00:16:46.043 "queue_depth": 128, 00:16:46.043 "io_size": 4096, 00:16:46.043 "runtime": 4.014663, 00:16:46.043 "iops": 5797.497822357693, 00:16:46.043 "mibps": 22.646475868584737, 00:16:46.043 "io_failed": 0, 00:16:46.043 "io_timeout": 0, 00:16:46.043 "avg_latency_us": 22016.534953912254, 00:16:46.043 "min_latency_us": 274.11692307692306, 00:16:46.043 "max_latency_us": 96791.63076923077 00:16:46.043 } 00:16:46.043 ], 00:16:46.043 "core_count": 1 00:16:46.043 } 00:16:46.304 04:22:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:46.304 [2024-11-17 04:22:31.983318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.304 [2024-11-17 04:22:31.983396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:46.304 [2024-11-17 04:22:31.983413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:46.304 [2024-11-17 04:22:31.983425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.304 [2024-11-17 04:22:31.983450] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:46.304 [2024-11-17 04:22:31.984159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.304 [2024-11-17 04:22:31.984214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:46.304 [2024-11-17 04:22:31.984241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:16:46.304 [2024-11-17 04:22:31.984253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.304 [2024-11-17 04:22:31.987473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.304 [2024-11-17 04:22:31.987518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:46.304 [2024-11-17 04:22:31.987529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.193 ms 00:16:46.304 [2024-11-17 04:22:31.987563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.567 [2024-11-17 04:22:32.203080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.567 [2024-11-17 04:22:32.203145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:16:46.567 [2024-11-17 04:22:32.203169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 215.492 ms 00:16:46.567 [2024-11-17 04:22:32.203183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.567 [2024-11-17 04:22:32.209408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.567 [2024-11-17 04:22:32.209458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:46.567 [2024-11-17 04:22:32.209470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:16:46.567 [2024-11-17 04:22:32.209480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.567 [2024-11-17 04:22:32.212501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.567 [2024-11-17 04:22:32.212568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:46.567 [2024-11-17 04:22:32.212579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.947 ms 00:16:46.567 [2024-11-17 04:22:32.212589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.567 [2024-11-17 04:22:32.218634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.567 [2024-11-17 04:22:32.218700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:46.567 [2024-11-17 04:22:32.218711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.998 ms 00:16:46.567 [2024-11-17 04:22:32.218724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.567 [2024-11-17 04:22:32.218850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.567 [2024-11-17 04:22:32.218869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:46.567 [2024-11-17 04:22:32.218878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:46.567 [2024-11-17 04:22:32.218889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.567 [2024-11-17 04:22:32.222000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.567 [2024-11-17 04:22:32.222063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:46.567 [2024-11-17 04:22:32.222074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:16:46.567 [2024-11-17 04:22:32.222083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.567 [2024-11-17 04:22:32.224842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.567 [2024-11-17 04:22:32.224898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:46.567 [2024-11-17 04:22:32.224908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.712 ms 00:16:46.567 [2024-11-17 04:22:32.224917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.567 [2024-11-17 04:22:32.227281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.567 [2024-11-17 04:22:32.227339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:46.567 [2024-11-17 04:22:32.227348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.321 ms 00:16:46.567 [2024-11-17 04:22:32.227361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.567 [2024-11-17 04:22:32.229698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.567 [2024-11-17 04:22:32.229753] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:46.567 [2024-11-17 04:22:32.229762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:16:46.567 [2024-11-17 04:22:32.229772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.567 [2024-11-17 04:22:32.229811] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:46.567 [2024-11-17 04:22:32.229829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.229999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:46.567 [2024-11-17 04:22:32.230035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:46.567 [2024-11-17 04:22:32.230149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230773] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:46.568 [2024-11-17 04:22:32.230816] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:46.568 [2024-11-17 04:22:32.230828] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b869e45c-5422-4865-92f9-26befc46bb7c 00:16:46.568 [2024-11-17 04:22:32.230839] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:46.568 [2024-11-17 04:22:32.230847] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:46.568 [2024-11-17 04:22:32.230855] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:46.568 [2024-11-17 04:22:32.230864] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:46.568 [2024-11-17 04:22:32.230875] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:46.568 [2024-11-17 04:22:32.230885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:46.568 [2024-11-17 04:22:32.230894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:46.568 [2024-11-17 04:22:32.230901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:46.568 [2024-11-17 04:22:32.230908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:46.568 [2024-11-17 04:22:32.230915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.568 [2024-11-17 04:22:32.230934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:46.568 [2024-11-17 04:22:32.230946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:16:46.568 [2024-11-17 04:22:32.230956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.568 [2024-11-17 04:22:32.233301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.568 [2024-11-17 04:22:32.233350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:46.568 [2024-11-17 04:22:32.233361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:16:46.568 [2024-11-17 04:22:32.233390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.568 [2024-11-17 04:22:32.233502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.568 [2024-11-17 04:22:32.233516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:46.568 [2024-11-17 04:22:32.233530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:46.568 [2024-11-17 04:22:32.233543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.568 [2024-11-17 04:22:32.241517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.568 [2024-11-17 04:22:32.241576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:46.568 [2024-11-17 04:22:32.241587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.568 [2024-11-17 04:22:32.241601] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:46.568 [2024-11-17 04:22:32.241662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.568 [2024-11-17 04:22:32.241673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:46.569 [2024-11-17 04:22:32.241685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.569 [2024-11-17 04:22:32.241694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.241765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.569 [2024-11-17 04:22:32.241778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:46.569 [2024-11-17 04:22:32.241787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.569 [2024-11-17 04:22:32.241797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.241817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.569 [2024-11-17 04:22:32.241827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:46.569 [2024-11-17 04:22:32.241835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.569 [2024-11-17 04:22:32.241849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.255060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.569 [2024-11-17 04:22:32.255118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:46.569 [2024-11-17 04:22:32.255129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.569 [2024-11-17 04:22:32.255139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.265745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.569 [2024-11-17 04:22:32.265806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:46.569 [2024-11-17 04:22:32.265817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.569 [2024-11-17 04:22:32.265830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.265896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.569 [2024-11-17 04:22:32.265908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:46.569 [2024-11-17 04:22:32.265917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.569 [2024-11-17 04:22:32.265928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.265972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.569 [2024-11-17 04:22:32.265983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:46.569 [2024-11-17 04:22:32.265992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.569 [2024-11-17 04:22:32.266005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.266074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.569 [2024-11-17 04:22:32.266087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:46.569 [2024-11-17 04:22:32.266095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:46.569 [2024-11-17 04:22:32.266106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.266140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.569 [2024-11-17 04:22:32.266151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:46.569 [2024-11-17 04:22:32.266161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.569 [2024-11-17 04:22:32.266171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.266212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.569 [2024-11-17 04:22:32.266225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:46.569 [2024-11-17 04:22:32.266234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.569 [2024-11-17 04:22:32.266244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.266287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.569 [2024-11-17 04:22:32.266307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:46.569 [2024-11-17 04:22:32.266316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.569 [2024-11-17 04:22:32.266328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.569 [2024-11-17 04:22:32.266487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 283.134 ms, result 0 00:16:46.569 true 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84590 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 84590 ']' 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 84590 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84590 00:16:46.830 killing process with pid 84590 00:16:46.830 Received shutdown signal, test time was about 4.000000 seconds 00:16:46.830 00:16:46.830 Latency(us) 00:16:46.830 [2024-11-17T04:22:32.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.830 [2024-11-17T04:22:32.557Z] =================================================================================================================== 00:16:46.830 [2024-11-17T04:22:32.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84590' 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 84590 00:16:46.830 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 84590 00:16:47.091 Remove shared memory files 00:16:47.091 04:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:47.091 04:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:47.091 04:22:32 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:47.091 04:22:32 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:47.091 04:22:32 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:47.091 04:22:32 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:47.091 04:22:32 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:47.091 04:22:32 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:47.091 00:16:47.091 real 0m21.020s 00:16:47.091 user 0m23.534s 00:16:47.091 sys 0m0.966s 00:16:47.091 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.091 ************************************ 00:16:47.091 END TEST ftl_bdevperf 00:16:47.091 ************************************ 00:16:47.091 04:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:47.091 04:22:32 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:47.091 04:22:32 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:47.091 04:22:32 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.091 04:22:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:47.091 ************************************ 00:16:47.091 START TEST ftl_trim 00:16:47.091 ************************************ 00:16:47.091 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:47.091 * Looking for test storage... 00:16:47.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:47.091 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:47.091 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:16:47.091 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:47.091 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.091 04:22:32 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:47.352 04:22:32 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.352 04:22:32 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.352 04:22:32 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.352 04:22:32 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:47.352 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.352 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:47.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.352 --rc genhtml_branch_coverage=1 00:16:47.352 --rc genhtml_function_coverage=1 00:16:47.352 --rc genhtml_legend=1 00:16:47.352 --rc geninfo_all_blocks=1 00:16:47.352 --rc geninfo_unexecuted_blocks=1 00:16:47.352 00:16:47.352 ' 00:16:47.352 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:47.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.352 --rc genhtml_branch_coverage=1 00:16:47.352 --rc genhtml_function_coverage=1 00:16:47.352 --rc genhtml_legend=1 00:16:47.352 --rc geninfo_all_blocks=1 00:16:47.352 --rc geninfo_unexecuted_blocks=1 00:16:47.352 00:16:47.352 ' 00:16:47.352 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:47.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.352 --rc genhtml_branch_coverage=1 00:16:47.352 --rc genhtml_function_coverage=1 00:16:47.352 --rc genhtml_legend=1 00:16:47.352 --rc geninfo_all_blocks=1 00:16:47.352 --rc geninfo_unexecuted_blocks=1 00:16:47.352 00:16:47.352 ' 00:16:47.352 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:47.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.352 --rc genhtml_branch_coverage=1 00:16:47.352 --rc genhtml_function_coverage=1 00:16:47.352 --rc genhtml_legend=1 00:16:47.352 --rc geninfo_all_blocks=1 00:16:47.352 --rc geninfo_unexecuted_blocks=1 00:16:47.352 00:16:47.352 ' 00:16:47.352 04:22:32 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:47.352 04:22:32 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:47.353 04:22:32 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=84931 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 84931 00:16:47.353 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 84931 ']' 00:16:47.353 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.353 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.353 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.353 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.353 04:22:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:47.353 04:22:32 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:47.353 [2024-11-17 04:22:32.921581] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:16:47.353 [2024-11-17 04:22:32.921711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84931 ] 00:16:47.614 [2024-11-17 04:22:33.083508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:47.614 [2024-11-17 04:22:33.105871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.614 [2024-11-17 04:22:33.106342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.614 [2024-11-17 04:22:33.106410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.188 04:22:33 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.188 04:22:33 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:48.188 04:22:33 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:48.188 04:22:33 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:48.188 04:22:33 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:48.188 04:22:33 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:48.188 04:22:33 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:48.188 04:22:33 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:48.450 04:22:34 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:48.450 04:22:34 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:48.450 04:22:34 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:48.450 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:48.450 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:48.450 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:48.450 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:48.450 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:48.711 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:48.711 { 00:16:48.711 "name": "nvme0n1", 00:16:48.711 "aliases": [ 
00:16:48.711 "13392a40-b1f4-4aa1-b49d-5ff101d3fc68" 00:16:48.711 ], 00:16:48.711 "product_name": "NVMe disk", 00:16:48.711 "block_size": 4096, 00:16:48.711 "num_blocks": 1310720, 00:16:48.711 "uuid": "13392a40-b1f4-4aa1-b49d-5ff101d3fc68", 00:16:48.711 "numa_id": -1, 00:16:48.711 "assigned_rate_limits": { 00:16:48.711 "rw_ios_per_sec": 0, 00:16:48.711 "rw_mbytes_per_sec": 0, 00:16:48.711 "r_mbytes_per_sec": 0, 00:16:48.711 "w_mbytes_per_sec": 0 00:16:48.711 }, 00:16:48.711 "claimed": true, 00:16:48.711 "claim_type": "read_many_write_one", 00:16:48.711 "zoned": false, 00:16:48.711 "supported_io_types": { 00:16:48.711 "read": true, 00:16:48.711 "write": true, 00:16:48.712 "unmap": true, 00:16:48.712 "flush": true, 00:16:48.712 "reset": true, 00:16:48.712 "nvme_admin": true, 00:16:48.712 "nvme_io": true, 00:16:48.712 "nvme_io_md": false, 00:16:48.712 "write_zeroes": true, 00:16:48.712 "zcopy": false, 00:16:48.712 "get_zone_info": false, 00:16:48.712 "zone_management": false, 00:16:48.712 "zone_append": false, 00:16:48.712 "compare": true, 00:16:48.712 "compare_and_write": false, 00:16:48.712 "abort": true, 00:16:48.712 "seek_hole": false, 00:16:48.712 "seek_data": false, 00:16:48.712 "copy": true, 00:16:48.712 "nvme_iov_md": false 00:16:48.712 }, 00:16:48.712 "driver_specific": { 00:16:48.712 "nvme": [ 00:16:48.712 { 00:16:48.712 "pci_address": "0000:00:11.0", 00:16:48.712 "trid": { 00:16:48.712 "trtype": "PCIe", 00:16:48.712 "traddr": "0000:00:11.0" 00:16:48.712 }, 00:16:48.712 "ctrlr_data": { 00:16:48.712 "cntlid": 0, 00:16:48.712 "vendor_id": "0x1b36", 00:16:48.712 "model_number": "QEMU NVMe Ctrl", 00:16:48.712 "serial_number": "12341", 00:16:48.712 "firmware_revision": "8.0.0", 00:16:48.712 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:48.712 "oacs": { 00:16:48.712 "security": 0, 00:16:48.712 "format": 1, 00:16:48.712 "firmware": 0, 00:16:48.712 "ns_manage": 1 00:16:48.712 }, 00:16:48.712 "multi_ctrlr": false, 00:16:48.712 "ana_reporting": false 00:16:48.712 }, 00:16:48.712 "vs": { 00:16:48.712 "nvme_version": "1.4" 00:16:48.712 }, 00:16:48.712 "ns_data": { 00:16:48.712 "id": 1, 00:16:48.712 "can_share": false 00:16:48.712 } 00:16:48.712 } 00:16:48.712 ], 00:16:48.712 "mp_policy": "active_passive" 00:16:48.712 } 00:16:48.712 } 00:16:48.712 ]' 00:16:48.712 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:48.712 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:48.712 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:48.712 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:48.712 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:48.712 04:22:34 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:16:48.712 04:22:34 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:48.712 04:22:34 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:48.712 04:22:34 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:48.712 04:22:34 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:48.712 04:22:34 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:48.973 04:22:34 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=1fa9ce58-cfdb-4c68-9492-9304e23351e6 00:16:48.973 04:22:34 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:48.973 04:22:34 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 1fa9ce58-cfdb-4c68-9492-9304e23351e6 00:16:48.973 04:22:34 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:49.234 04:22:34 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=35847265-ae0a-489b-bac3-c3d989fc0818 00:16:49.234 04:22:34 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 35847265-ae0a-489b-bac3-c3d989fc0818 00:16:49.494 04:22:35 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:49.494 04:22:35 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:49.494 04:22:35 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:49.494 04:22:35 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:49.494 04:22:35 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:49.494 04:22:35 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:49.494 04:22:35 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:49.494 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:49.494 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:49.494 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:49.494 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:49.494 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:49.756 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:49.756 { 00:16:49.756 "name": "42ddcd5d-13db-48d2-9b10-5b8e04990025", 00:16:49.756 "aliases": [ 00:16:49.756 "lvs/nvme0n1p0" 00:16:49.756 ], 00:16:49.756 "product_name": "Logical Volume", 00:16:49.756 "block_size": 4096, 00:16:49.756 "num_blocks": 26476544, 00:16:49.756 "uuid": "42ddcd5d-13db-48d2-9b10-5b8e04990025", 00:16:49.756 "assigned_rate_limits": { 00:16:49.756 "rw_ios_per_sec": 0, 00:16:49.756 "rw_mbytes_per_sec": 0, 00:16:49.756 "r_mbytes_per_sec": 0, 00:16:49.756 "w_mbytes_per_sec": 0 00:16:49.756 }, 00:16:49.756 "claimed": false, 00:16:49.756 "zoned": false, 00:16:49.756 "supported_io_types": { 00:16:49.756 "read": true, 00:16:49.756 "write": true, 00:16:49.756 "unmap": true, 00:16:49.756 "flush": false, 00:16:49.756 "reset": true, 00:16:49.756 "nvme_admin": false, 00:16:49.756 "nvme_io": false, 00:16:49.756 "nvme_io_md": false, 00:16:49.756 "write_zeroes": true, 00:16:49.756 "zcopy": false, 00:16:49.756 "get_zone_info": false, 00:16:49.756 "zone_management": false, 00:16:49.756 "zone_append": false, 00:16:49.756 "compare": false, 00:16:49.756 "compare_and_write": false, 00:16:49.756 "abort": false, 00:16:49.756 "seek_hole": true, 00:16:49.756 "seek_data": true, 00:16:49.756 "copy": false, 00:16:49.756 "nvme_iov_md": false 00:16:49.756 }, 00:16:49.756 "driver_specific": { 00:16:49.756 "lvol": { 00:16:49.756 "lvol_store_uuid": "35847265-ae0a-489b-bac3-c3d989fc0818", 00:16:49.756 "base_bdev": "nvme0n1", 00:16:49.756 "thin_provision": true, 00:16:49.756 "num_allocated_clusters": 0, 00:16:49.756 "snapshot": false, 00:16:49.756 "clone": false, 00:16:49.756 "esnap_clone": false 00:16:49.756 } 00:16:49.756 } 00:16:49.756 } 00:16:49.756 ]' 00:16:49.756 04:22:35 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:49.756 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:49.756 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:49.756 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:49.756 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:49.756 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:49.756 04:22:35 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:49.756 04:22:35 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:49.756 04:22:35 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:50.017 04:22:35 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:50.017 04:22:35 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:50.017 04:22:35 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:50.017 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:50.017 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:50.017 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:50.017 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:50.017 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:50.279 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:50.279 { 00:16:50.279 "name": "42ddcd5d-13db-48d2-9b10-5b8e04990025", 00:16:50.279 "aliases": [ 00:16:50.279 "lvs/nvme0n1p0" 00:16:50.279 ], 00:16:50.279 "product_name": "Logical Volume", 00:16:50.279 "block_size": 4096, 00:16:50.279 "num_blocks": 26476544, 00:16:50.279 "uuid": "42ddcd5d-13db-48d2-9b10-5b8e04990025", 00:16:50.279 "assigned_rate_limits": { 00:16:50.279 "rw_ios_per_sec": 0, 00:16:50.279 "rw_mbytes_per_sec": 0, 00:16:50.279 "r_mbytes_per_sec": 0, 00:16:50.279 "w_mbytes_per_sec": 0 00:16:50.279 }, 00:16:50.279 "claimed": false, 00:16:50.279 "zoned": false, 00:16:50.279 "supported_io_types": { 00:16:50.279 "read": true, 00:16:50.279 "write": true, 00:16:50.279 "unmap": true, 00:16:50.279 "flush": false, 00:16:50.279 "reset": true, 00:16:50.279 "nvme_admin": false, 00:16:50.279 "nvme_io": false, 00:16:50.279 "nvme_io_md": false, 00:16:50.279 "write_zeroes": true, 00:16:50.279 "zcopy": false, 00:16:50.279 "get_zone_info": false, 00:16:50.279 "zone_management": false, 00:16:50.279 "zone_append": false, 00:16:50.279 "compare": false, 00:16:50.279 "compare_and_write": false, 00:16:50.279 "abort": false, 00:16:50.279 "seek_hole": true, 00:16:50.279 "seek_data": true, 00:16:50.279 "copy": false, 00:16:50.279 "nvme_iov_md": false 00:16:50.279 }, 00:16:50.279 "driver_specific": { 00:16:50.279 "lvol": { 00:16:50.279 "lvol_store_uuid": "35847265-ae0a-489b-bac3-c3d989fc0818", 00:16:50.279 "base_bdev": "nvme0n1", 00:16:50.279 "thin_provision": true, 00:16:50.279 "num_allocated_clusters": 0, 00:16:50.279 "snapshot": false, 00:16:50.279 "clone": false, 00:16:50.279 "esnap_clone": false 00:16:50.279 } 00:16:50.279 } 00:16:50.279 } 00:16:50.279 ]' 00:16:50.279 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:50.279 04:22:35 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:16:50.279 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:50.279 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:50.279 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:50.279 04:22:35 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:50.279 04:22:35 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:50.279 04:22:35 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:50.540 04:22:36 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:50.540 04:22:36 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:50.540 04:22:36 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:50.540 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:50.540 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:50.541 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:50.541 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:50.541 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42ddcd5d-13db-48d2-9b10-5b8e04990025 00:16:50.541 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:50.541 { 00:16:50.541 "name": "42ddcd5d-13db-48d2-9b10-5b8e04990025", 00:16:50.541 "aliases": [ 00:16:50.541 "lvs/nvme0n1p0" 00:16:50.541 ], 00:16:50.541 "product_name": "Logical Volume", 00:16:50.541 "block_size": 4096, 00:16:50.541 "num_blocks": 26476544, 00:16:50.541 "uuid": "42ddcd5d-13db-48d2-9b10-5b8e04990025", 00:16:50.541 "assigned_rate_limits": { 00:16:50.541 "rw_ios_per_sec": 0, 00:16:50.541 "rw_mbytes_per_sec": 0, 00:16:50.541 "r_mbytes_per_sec": 0, 00:16:50.541 "w_mbytes_per_sec": 0 00:16:50.541 }, 00:16:50.541 "claimed": false, 00:16:50.541 "zoned": false, 00:16:50.541 "supported_io_types": { 00:16:50.541 "read": true, 00:16:50.541 "write": true, 00:16:50.541 "unmap": true, 00:16:50.541 "flush": false, 00:16:50.541 "reset": true, 00:16:50.541 "nvme_admin": false, 00:16:50.541 "nvme_io": false, 00:16:50.541 "nvme_io_md": false, 00:16:50.541 "write_zeroes": true, 00:16:50.541 "zcopy": false, 00:16:50.541 "get_zone_info": false, 00:16:50.541 "zone_management": false, 00:16:50.541 "zone_append": false, 00:16:50.541 "compare": false, 00:16:50.541 "compare_and_write": false, 00:16:50.541 "abort": false, 00:16:50.541 "seek_hole": true, 00:16:50.541 "seek_data": true, 00:16:50.541 "copy": false, 00:16:50.541 "nvme_iov_md": false 00:16:50.541 }, 00:16:50.541 "driver_specific": { 00:16:50.541 "lvol": { 00:16:50.541 "lvol_store_uuid": "35847265-ae0a-489b-bac3-c3d989fc0818", 00:16:50.541 "base_bdev": "nvme0n1", 00:16:50.541 "thin_provision": true, 00:16:50.541 "num_allocated_clusters": 0, 00:16:50.541 "snapshot": false, 00:16:50.541 "clone": false, 00:16:50.541 "esnap_clone": false 00:16:50.541 } 00:16:50.541 } 00:16:50.541 } 00:16:50.541 ]' 00:16:50.541 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:50.541 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:50.541 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:50.802 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:16:50.802 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:50.802 04:22:36 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:50.802 04:22:36 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:50.802 04:22:36 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 42ddcd5d-13db-48d2-9b10-5b8e04990025 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:50.802 [2024-11-17 04:22:36.472591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.802 [2024-11-17 04:22:36.472639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:50.802 [2024-11-17 04:22:36.472653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:50.802 [2024-11-17 04:22:36.472666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.802 [2024-11-17 04:22:36.475212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.802 [2024-11-17 04:22:36.475247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:50.802 [2024-11-17 04:22:36.475258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:16:50.802 [2024-11-17 04:22:36.475269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.802 [2024-11-17 04:22:36.475362] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:50.802 [2024-11-17 04:22:36.476078] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:50.803 [2024-11-17 04:22:36.476194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.803 [2024-11-17 04:22:36.476211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:50.803 [2024-11-17 04:22:36.476236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:16:50.803 [2024-11-17 04:22:36.476246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.803 [2024-11-17 04:22:36.476547] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3c15d8b6-41d9-49c5-8e71-5e06c15819a7 00:16:50.803 [2024-11-17 04:22:36.478153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.803 [2024-11-17 04:22:36.478320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:50.803 [2024-11-17 04:22:36.478416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:50.803 [2024-11-17 04:22:36.478465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.803 [2024-11-17 04:22:36.485739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.803 [2024-11-17 04:22:36.485902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:50.803 [2024-11-17 04:22:36.485982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.164 ms 00:16:50.803 [2024-11-17 04:22:36.486040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.803 [2024-11-17 04:22:36.486203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.803 [2024-11-17 04:22:36.486324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:50.803 [2024-11-17 04:22:36.486423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.070 ms 00:16:50.803 [2024-11-17 04:22:36.486489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.803 [2024-11-17 04:22:36.486580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.803 [2024-11-17 04:22:36.486690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:50.803 [2024-11-17 04:22:36.486739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:50.803 [2024-11-17 04:22:36.486780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.803 [2024-11-17 04:22:36.486853] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:50.803 [2024-11-17 04:22:36.489087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.803 [2024-11-17 04:22:36.489176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:50.803 [2024-11-17 04:22:36.489220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:16:50.803 [2024-11-17 04:22:36.489266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.803 [2024-11-17 04:22:36.489344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.803 [2024-11-17 04:22:36.489399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:50.803 [2024-11-17 04:22:36.489447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:50.803 [2024-11-17 04:22:36.489488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.803 [2024-11-17 04:22:36.489563] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:50.803 [2024-11-17 04:22:36.489740] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:50.803 [2024-11-17 04:22:36.489895] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:50.803 [2024-11-17 04:22:36.489968] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:50.803 [2024-11-17 04:22:36.490020] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:50.803 [2024-11-17 04:22:36.490066] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:50.803 [2024-11-17 04:22:36.490105] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:50.803 [2024-11-17 04:22:36.490152] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:50.803 [2024-11-17 04:22:36.490195] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:50.803 [2024-11-17 04:22:36.490301] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:50.803 [2024-11-17 04:22:36.490357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.803 [2024-11-17 04:22:36.490414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:50.803 [2024-11-17 04:22:36.490455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:16:50.803 [2024-11-17 04:22:36.490502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.803 [2024-11-17 04:22:36.490700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.803 
[2024-11-17 04:22:36.490760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:50.803 [2024-11-17 04:22:36.490809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:50.803 [2024-11-17 04:22:36.490852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.803 [2024-11-17 04:22:36.491006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:50.803 [2024-11-17 04:22:36.491117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:50.803 [2024-11-17 04:22:36.491183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:50.803 [2024-11-17 04:22:36.491241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.803 [2024-11-17 04:22:36.491309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:50.803 [2024-11-17 04:22:36.491426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:50.803 [2024-11-17 04:22:36.491477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:50.803 [2024-11-17 04:22:36.491532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:50.803 [2024-11-17 04:22:36.491575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:50.803 [2024-11-17 04:22:36.491630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:50.803 [2024-11-17 04:22:36.491674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:50.803 [2024-11-17 04:22:36.491756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:50.803 [2024-11-17 04:22:36.491805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:50.803 [2024-11-17 04:22:36.491853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:50.803 [2024-11-17 04:22:36.491894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:50.803 [2024-11-17 04:22:36.491981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.803 [2024-11-17 04:22:36.492026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:50.803 [2024-11-17 04:22:36.492066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:50.803 [2024-11-17 04:22:36.492107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.803 [2024-11-17 04:22:36.492187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:50.803 [2024-11-17 04:22:36.492243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:50.803 [2024-11-17 04:22:36.492285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.803 [2024-11-17 04:22:36.492329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:50.803 [2024-11-17 04:22:36.492427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:50.803 [2024-11-17 04:22:36.492485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.803 [2024-11-17 04:22:36.492522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:50.803 [2024-11-17 04:22:36.492563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:50.803 [2024-11-17 04:22:36.492605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.803 [2024-11-17 04:22:36.492686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:50.803 [2024-11-17 04:22:36.492746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:50.803 [2024-11-17 04:22:36.492788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.803 [2024-11-17 04:22:36.492824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:50.803 [2024-11-17 04:22:36.492914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:50.803 [2024-11-17 04:22:36.492972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:50.803 [2024-11-17 04:22:36.493017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:50.803 [2024-11-17 04:22:36.493028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:50.803 [2024-11-17 04:22:36.493036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:50.803 [2024-11-17 04:22:36.493044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:50.803 [2024-11-17 04:22:36.493051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:50.803 [2024-11-17 04:22:36.493061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.803 [2024-11-17 04:22:36.493068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:50.803 [2024-11-17 04:22:36.493077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:50.803 [2024-11-17 04:22:36.493084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.803 [2024-11-17 04:22:36.493092] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:50.803 [2024-11-17 04:22:36.493100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:50.803 [2024-11-17 04:22:36.493112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:50.803 [2024-11-17 04:22:36.493119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.803 [2024-11-17 04:22:36.493129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:50.803 [2024-11-17 04:22:36.493136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:50.804 [2024-11-17 04:22:36.493144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:50.804 [2024-11-17 04:22:36.493151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:50.804 [2024-11-17 04:22:36.493159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:50.804 [2024-11-17 04:22:36.493166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:50.804 [2024-11-17 04:22:36.493178] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:50.804 [2024-11-17 04:22:36.493188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:50.804 [2024-11-17 04:22:36.493198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:50.804 [2024-11-17 04:22:36.493206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:50.804 [2024-11-17 04:22:36.493214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:50.804 [2024-11-17 04:22:36.493221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:50.804 [2024-11-17 04:22:36.493230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:50.804 [2024-11-17 04:22:36.493237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:50.804 [2024-11-17 04:22:36.493247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:50.804 [2024-11-17 04:22:36.493254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:50.804 [2024-11-17 04:22:36.493264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:50.804 [2024-11-17 04:22:36.493272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:50.804 [2024-11-17 04:22:36.493280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:50.804 [2024-11-17 04:22:36.493287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:50.804 [2024-11-17 04:22:36.493298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:50.804 [2024-11-17 04:22:36.493306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:50.804 [2024-11-17 04:22:36.493315] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:50.804 [2024-11-17 04:22:36.493323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:50.804 [2024-11-17 04:22:36.493334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:50.804 [2024-11-17 04:22:36.493342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:50.804 [2024-11-17 04:22:36.493351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:50.804 [2024-11-17 04:22:36.493358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:50.804 [2024-11-17 04:22:36.493368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.804 [2024-11-17 04:22:36.494550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:50.804 [2024-11-17 04:22:36.494633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.428 ms 00:16:50.804 [2024-11-17 04:22:36.494775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.804 [2024-11-17 04:22:36.494928] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:50.804 [2024-11-17 04:22:36.495040] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:53.407 [2024-11-17 04:22:38.820338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.820447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:53.407 [2024-11-17 04:22:38.820477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2325.389 ms 00:16:53.407 [2024-11-17 04:22:38.820511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.830059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.830098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:53.407 [2024-11-17 04:22:38.830123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.399 ms 00:16:53.407 [2024-11-17 04:22:38.830130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.830265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.830275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:53.407 [2024-11-17 04:22:38.830285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:53.407 [2024-11-17 04:22:38.830305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.848429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.848675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:53.407 [2024-11-17 04:22:38.849054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.083 ms 00:16:53.407 [2024-11-17 04:22:38.849235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.849535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.849857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:53.407 [2024-11-17 04:22:38.850083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:53.407 [2024-11-17 04:22:38.850279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.850944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.851197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:53.407 [2024-11-17 04:22:38.851466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:16:53.407 [2024-11-17 04:22:38.851605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.852081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.852108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:53.407 [2024-11-17 04:22:38.852129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:16:53.407 [2024-11-17 04:22:38.852147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.860632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.860871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:53.407 [2024-11-17 04:22:38.861069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.427 ms 00:16:53.407 [2024-11-17 04:22:38.861187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.869554] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:53.407 [2024-11-17 04:22:38.884268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.884442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:53.407 [2024-11-17 04:22:38.884545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.942 ms 00:16:53.407 [2024-11-17 04:22:38.884609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.936637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.936819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:53.407 [2024-11-17 04:22:38.936939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.863 ms 00:16:53.407 [2024-11-17 04:22:38.937009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.937280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.937345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:53.407 [2024-11-17 04:22:38.937472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:16:53.407 [2024-11-17 04:22:38.937534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.940886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.941037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:53.407 [2024-11-17 04:22:38.941152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.281 ms 00:16:53.407 [2024-11-17 04:22:38.941217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.407 [2024-11-17 04:22:38.943841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.407 [2024-11-17 04:22:38.943992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:53.407 [2024-11-17 04:22:38.944122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:16:53.408 [2024-11-17 04:22:38.944251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.408 [2024-11-17 04:22:38.944625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.408 [2024-11-17 04:22:38.944757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:53.408 [2024-11-17 04:22:38.944858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:16:53.408 [2024-11-17 04:22:38.944926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.408 [2024-11-17 04:22:38.972735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.408 [2024-11-17 04:22:38.972891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:53.408 [2024-11-17 04:22:38.972993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.718 ms 00:16:53.408 [2024-11-17 04:22:38.973059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
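The trace above is the FTL bring-up that ftl/trim.sh drives through scripts/rpc.py. A minimal sketch of that RPC sequence, with the PCIe address, lvol UUID, and sizes copied from this run (the test scripts compute these values themselves, so treat the literals as illustrative rather than canonical):

  # Attach the PCIe controller that backs the write-buffer (NV) cache
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  # Split off a 5171 MiB cache partition -> nvc0n1p0 (size chosen by ftl/common.sh)
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # Base device size check: 26476544 blocks * 4096 B = 103424 MiB (~101 GiB)
  scripts/rpc.py bdev_get_bdevs -b 42ddcd5d-13db-48d2-9b10-5b8e04990025 | jq '.[] .num_blocks'
  # Create the FTL bdev on the lvol, with the cache split, a 60 MiB L2P DRAM limit and 10% overprovisioning
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 42ddcd5d-13db-48d2-9b10-5b8e04990025 \
      -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The resulting ftl0 exposes 23592960 4 KiB blocks (90 GiB) of the 103424 MiB base device; the 60 MiB L2P DRAM limit keeps only part of that 23592960-entry map resident (the log below reports a 59 of 60 MiB maximum resident size), presumably to exercise the non-resident L2P path that the trim test targets.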
00:16:53.408 [2024-11-17 04:22:38.977126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.408 [2024-11-17 04:22:38.977275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:53.408 [2024-11-17 04:22:38.977410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.898 ms 00:16:53.408 [2024-11-17 04:22:38.977472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.408 [2024-11-17 04:22:38.980656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.408 [2024-11-17 04:22:38.980800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:53.408 [2024-11-17 04:22:38.980903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.017 ms 00:16:53.408 [2024-11-17 04:22:38.980966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.408 [2024-11-17 04:22:38.984247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.408 [2024-11-17 04:22:38.984449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:53.408 [2024-11-17 04:22:38.984574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.152 ms 00:16:53.408 [2024-11-17 04:22:38.984676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.408 [2024-11-17 04:22:38.984805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.408 [2024-11-17 04:22:38.984873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:53.408 [2024-11-17 04:22:38.984923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:53.408 [2024-11-17 04:22:38.985005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.408 [2024-11-17 04:22:38.985124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.408 [2024-11-17 04:22:38.985217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:53.408 [2024-11-17 04:22:38.985272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:53.408 [2024-11-17 04:22:38.985314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.408 [2024-11-17 04:22:38.986321] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:53.408 [2024-11-17 04:22:38.987341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2513.458 ms, result 0 00:16:53.408 [2024-11-17 04:22:38.987980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:53.408 { 00:16:53.408 "name": "ftl0", 00:16:53.408 "uuid": "3c15d8b6-41d9-49c5-8e71-5e06c15819a7" 00:16:53.408 } 00:16:53.408 04:22:39 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:53.408 04:22:39 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:53.408 04:22:39 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.408 04:22:39 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:16:53.408 04:22:39 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.408 04:22:39 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.408 04:22:39 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:53.666 04:22:39 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:53.923 [ 00:16:53.923 { 00:16:53.923 "name": "ftl0", 00:16:53.923 "aliases": [ 00:16:53.923 "3c15d8b6-41d9-49c5-8e71-5e06c15819a7" 00:16:53.923 ], 00:16:53.923 "product_name": "FTL disk", 00:16:53.923 "block_size": 4096, 00:16:53.923 "num_blocks": 23592960, 00:16:53.923 "uuid": "3c15d8b6-41d9-49c5-8e71-5e06c15819a7", 00:16:53.923 "assigned_rate_limits": { 00:16:53.923 "rw_ios_per_sec": 0, 00:16:53.923 "rw_mbytes_per_sec": 0, 00:16:53.923 "r_mbytes_per_sec": 0, 00:16:53.923 "w_mbytes_per_sec": 0 00:16:53.923 }, 00:16:53.923 "claimed": false, 00:16:53.923 "zoned": false, 00:16:53.923 "supported_io_types": { 00:16:53.923 "read": true, 00:16:53.923 "write": true, 00:16:53.923 "unmap": true, 00:16:53.923 "flush": true, 00:16:53.923 "reset": false, 00:16:53.923 "nvme_admin": false, 00:16:53.923 "nvme_io": false, 00:16:53.923 "nvme_io_md": false, 00:16:53.923 "write_zeroes": true, 00:16:53.923 "zcopy": false, 00:16:53.923 "get_zone_info": false, 00:16:53.923 "zone_management": false, 00:16:53.923 "zone_append": false, 00:16:53.923 "compare": false, 00:16:53.923 "compare_and_write": false, 00:16:53.923 "abort": false, 00:16:53.923 "seek_hole": false, 00:16:53.923 "seek_data": false, 00:16:53.923 "copy": false, 00:16:53.923 "nvme_iov_md": false 00:16:53.923 }, 00:16:53.923 "driver_specific": { 00:16:53.923 "ftl": { 00:16:53.923 "base_bdev": "42ddcd5d-13db-48d2-9b10-5b8e04990025", 00:16:53.923 "cache": "nvc0n1p0" 00:16:53.923 } 00:16:53.923 } 00:16:53.923 } 00:16:53.923 ] 00:16:53.923 04:22:39 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:16:53.923 04:22:39 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:53.923 04:22:39 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:53.923 04:22:39 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:53.923 04:22:39 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:54.181 04:22:39 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:54.181 { 00:16:54.181 "name": "ftl0", 00:16:54.181 "aliases": [ 00:16:54.181 "3c15d8b6-41d9-49c5-8e71-5e06c15819a7" 00:16:54.181 ], 00:16:54.181 "product_name": "FTL disk", 00:16:54.181 "block_size": 4096, 00:16:54.181 "num_blocks": 23592960, 00:16:54.181 "uuid": "3c15d8b6-41d9-49c5-8e71-5e06c15819a7", 00:16:54.181 "assigned_rate_limits": { 00:16:54.181 "rw_ios_per_sec": 0, 00:16:54.181 "rw_mbytes_per_sec": 0, 00:16:54.181 "r_mbytes_per_sec": 0, 00:16:54.181 "w_mbytes_per_sec": 0 00:16:54.181 }, 00:16:54.181 "claimed": false, 00:16:54.181 "zoned": false, 00:16:54.181 "supported_io_types": { 00:16:54.181 "read": true, 00:16:54.181 "write": true, 00:16:54.181 "unmap": true, 00:16:54.181 "flush": true, 00:16:54.181 "reset": false, 00:16:54.181 "nvme_admin": false, 00:16:54.181 "nvme_io": false, 00:16:54.181 "nvme_io_md": false, 00:16:54.181 "write_zeroes": true, 00:16:54.181 "zcopy": false, 00:16:54.182 "get_zone_info": false, 00:16:54.182 "zone_management": false, 00:16:54.182 "zone_append": false, 00:16:54.182 "compare": false, 00:16:54.182 "compare_and_write": false, 00:16:54.182 "abort": false, 00:16:54.182 "seek_hole": false, 00:16:54.182 "seek_data": false, 00:16:54.182 "copy": false, 00:16:54.182 "nvme_iov_md": false 00:16:54.182 }, 00:16:54.182 "driver_specific": { 00:16:54.182 "ftl": { 00:16:54.182 "base_bdev": "42ddcd5d-13db-48d2-9b10-5b8e04990025", 
00:16:54.182 "cache": "nvc0n1p0" 00:16:54.182 } 00:16:54.182 } 00:16:54.182 } 00:16:54.182 ]' 00:16:54.182 04:22:39 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:54.182 04:22:39 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:54.182 04:22:39 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:54.441 [2024-11-17 04:22:40.012118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.012156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:54.441 [2024-11-17 04:22:40.012172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:54.441 [2024-11-17 04:22:40.012188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.441 [2024-11-17 04:22:40.012236] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:54.441 [2024-11-17 04:22:40.012671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.012818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:54.441 [2024-11-17 04:22:40.012975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:16:54.441 [2024-11-17 04:22:40.013037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.441 [2024-11-17 04:22:40.013675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.013805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:54.441 [2024-11-17 04:22:40.013880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:16:54.441 [2024-11-17 04:22:40.014013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.441 [2024-11-17 04:22:40.017754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.017885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:54.441 [2024-11-17 04:22:40.017984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.631 ms 00:16:54.441 [2024-11-17 04:22:40.018077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.441 [2024-11-17 04:22:40.025164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.025312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:54.441 [2024-11-17 04:22:40.025430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.002 ms 00:16:54.441 [2024-11-17 04:22:40.025495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.441 [2024-11-17 04:22:40.027162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.027315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:54.441 [2024-11-17 04:22:40.027364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:16:54.441 [2024-11-17 04:22:40.027427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.441 [2024-11-17 04:22:40.031154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.031314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:54.441 [2024-11-17 04:22:40.031390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.652 ms 00:16:54.441 [2024-11-17 04:22:40.031451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.441 [2024-11-17 04:22:40.031669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.031775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:54.441 [2024-11-17 04:22:40.031842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:54.441 [2024-11-17 04:22:40.031940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.441 [2024-11-17 04:22:40.033752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.033898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:54.441 [2024-11-17 04:22:40.033945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:16:54.441 [2024-11-17 04:22:40.033990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.441 [2024-11-17 04:22:40.035240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.035407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:54.441 [2024-11-17 04:22:40.035454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:16:54.441 [2024-11-17 04:22:40.035492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.441 [2024-11-17 04:22:40.036414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.441 [2024-11-17 04:22:40.036562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:54.441 [2024-11-17 04:22:40.036644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:16:54.441 [2024-11-17 04:22:40.036724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.442 [2024-11-17 04:22:40.037733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.442 [2024-11-17 04:22:40.037866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:54.442 [2024-11-17 04:22:40.037953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:16:54.442 [2024-11-17 04:22:40.038014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.442 [2024-11-17 04:22:40.038139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:54.442 [2024-11-17 04:22:40.038213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.038323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.038459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.038573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.038638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.038726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.038827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.038960] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.039917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 
04:22:40.040486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.040958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:16:54.442 [2024-11-17 04:22:40.041824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.041984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.042994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.043035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.043078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:16:54.442 [2024-11-17 04:22:40.043117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.043996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:54.443 [2024-11-17 04:22:40.044083] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:54.443 [2024-11-17 04:22:40.044132] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c15d8b6-41d9-49c5-8e71-5e06c15819a7 00:16:54.443 [2024-11-17 04:22:40.044172] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:54.443 [2024-11-17 04:22:40.044207] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:54.443 [2024-11-17 04:22:40.044325] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:54.443 [2024-11-17 04:22:40.044342] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:54.443 [2024-11-17 04:22:40.044351] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:54.443 [2024-11-17 04:22:40.044371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:54.443 
[2024-11-17 04:22:40.044399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:54.443 [2024-11-17 04:22:40.044406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:54.443 [2024-11-17 04:22:40.044414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:54.443 [2024-11-17 04:22:40.044422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.443 [2024-11-17 04:22:40.044431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:54.443 [2024-11-17 04:22:40.044440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.284 ms 00:16:54.443 [2024-11-17 04:22:40.044451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.045998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.443 [2024-11-17 04:22:40.046019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:54.443 [2024-11-17 04:22:40.046028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:16:54.443 [2024-11-17 04:22:40.046037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.046137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.443 [2024-11-17 04:22:40.046148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:54.443 [2024-11-17 04:22:40.046156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:54.443 [2024-11-17 04:22:40.046165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.051570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.051605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:54.443 [2024-11-17 04:22:40.051617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.051626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.051700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.051711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.443 [2024-11-17 04:22:40.051719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.051730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.051787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.051800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.443 [2024-11-17 04:22:40.051817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.051826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.051856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.051866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.443 [2024-11-17 04:22:40.051873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.051881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.061292] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.061336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.443 [2024-11-17 04:22:40.061346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.061355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.069226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.069269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.443 [2024-11-17 04:22:40.069280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.069291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.069345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.069356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.443 [2024-11-17 04:22:40.069366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.069391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.069454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.069477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.443 [2024-11-17 04:22:40.069494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.069503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.069581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.069592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.443 [2024-11-17 04:22:40.069600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.069610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.069665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.069676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:54.443 [2024-11-17 04:22:40.069683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.069694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.069749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.069759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.443 [2024-11-17 04:22:40.069767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.069777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 [2024-11-17 04:22:40.069828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.443 [2024-11-17 04:22:40.069840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.443 [2024-11-17 04:22:40.069849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.443 [2024-11-17 04:22:40.069859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.443 
[2024-11-17 04:22:40.070039] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.914 ms, result 0 00:16:54.443 true 00:16:54.443 04:22:40 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 84931 00:16:54.443 04:22:40 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 84931 ']' 00:16:54.443 04:22:40 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 84931 00:16:54.443 04:22:40 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:16:54.443 04:22:40 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.443 04:22:40 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84931 00:16:54.443 killing process with pid 84931 00:16:54.443 04:22:40 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.443 04:22:40 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.443 04:22:40 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84931' 00:16:54.443 04:22:40 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 84931 00:16:54.443 04:22:40 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 84931 00:16:59.712 04:22:44 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:00.652 65536+0 records in 00:17:00.652 65536+0 records out 00:17:00.652 268435456 bytes (268 MB, 256 MiB) copied, 1.10332 s, 243 MB/s 00:17:00.652 04:22:46 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:00.652 [2024-11-17 04:22:46.126582] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:17:00.652 [2024-11-17 04:22:46.127025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85096 ] 00:17:00.652 [2024-11-17 04:22:46.291700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.652 [2024-11-17 04:22:46.321293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.914 [2024-11-17 04:22:46.438851] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:00.914 [2024-11-17 04:22:46.439199] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:00.914 [2024-11-17 04:22:46.600256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.914 [2024-11-17 04:22:46.600320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:00.914 [2024-11-17 04:22:46.600341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:00.914 [2024-11-17 04:22:46.600350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.914 [2024-11-17 04:22:46.603081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.914 [2024-11-17 04:22:46.603140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.914 [2024-11-17 04:22:46.603152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.709 ms 00:17:00.914 [2024-11-17 04:22:46.603160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.914 [2024-11-17 04:22:46.603270] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:00.914 [2024-11-17 04:22:46.603550] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:00.914 [2024-11-17 04:22:46.603568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.914 [2024-11-17 04:22:46.603579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.914 [2024-11-17 04:22:46.603589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:17:00.914 [2024-11-17 04:22:46.603598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.914 [2024-11-17 04:22:46.605544] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:00.914 [2024-11-17 04:22:46.609278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.914 [2024-11-17 04:22:46.609327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:00.914 [2024-11-17 04:22:46.609345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.736 ms 00:17:00.914 [2024-11-17 04:22:46.609359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.914 [2024-11-17 04:22:46.609466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.914 [2024-11-17 04:22:46.609479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:00.914 [2024-11-17 04:22:46.609489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:00.914 [2024-11-17 04:22:46.609497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.914 [2024-11-17 04:22:46.617703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:00.914 [2024-11-17 04:22:46.617745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.914 [2024-11-17 04:22:46.617755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.160 ms 00:17:00.914 [2024-11-17 04:22:46.617765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.914 [2024-11-17 04:22:46.617910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.914 [2024-11-17 04:22:46.617922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.914 [2024-11-17 04:22:46.617932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:00.914 [2024-11-17 04:22:46.617940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.914 [2024-11-17 04:22:46.617975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.914 [2024-11-17 04:22:46.617984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:00.914 [2024-11-17 04:22:46.617991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:00.914 [2024-11-17 04:22:46.617999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.914 [2024-11-17 04:22:46.618021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:00.914 [2024-11-17 04:22:46.620100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.914 [2024-11-17 04:22:46.620137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.914 [2024-11-17 04:22:46.620148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.085 ms 00:17:00.914 [2024-11-17 04:22:46.620156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.914 [2024-11-17 04:22:46.620206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.914 [2024-11-17 04:22:46.620215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:00.914 [2024-11-17 04:22:46.620224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:00.915 [2024-11-17 04:22:46.620256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.915 [2024-11-17 04:22:46.620275] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:00.915 [2024-11-17 04:22:46.620297] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:00.915 [2024-11-17 04:22:46.620340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:00.915 [2024-11-17 04:22:46.620364] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:00.915 [2024-11-17 04:22:46.620512] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:00.915 [2024-11-17 04:22:46.620526] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:00.915 [2024-11-17 04:22:46.620539] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:00.915 [2024-11-17 04:22:46.620550] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:00.915 [2024-11-17 04:22:46.620559] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:00.915 [2024-11-17 04:22:46.620568] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:00.915 [2024-11-17 04:22:46.620575] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:00.915 [2024-11-17 04:22:46.620582] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:00.915 [2024-11-17 04:22:46.620593] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:00.915 [2024-11-17 04:22:46.620603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.915 [2024-11-17 04:22:46.620612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:00.915 [2024-11-17 04:22:46.620621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:17:00.915 [2024-11-17 04:22:46.620629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.915 [2024-11-17 04:22:46.620718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.915 [2024-11-17 04:22:46.620732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:00.915 [2024-11-17 04:22:46.620739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:00.915 [2024-11-17 04:22:46.620748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.915 [2024-11-17 04:22:46.620848] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:00.915 [2024-11-17 04:22:46.620863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:00.915 [2024-11-17 04:22:46.620875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:00.915 [2024-11-17 04:22:46.620891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.915 [2024-11-17 04:22:46.620900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:00.915 [2024-11-17 04:22:46.620910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:00.915 [2024-11-17 04:22:46.620918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:00.915 [2024-11-17 04:22:46.620929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:00.915 [2024-11-17 04:22:46.620938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:00.915 [2024-11-17 04:22:46.620946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:00.915 [2024-11-17 04:22:46.620954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:00.915 [2024-11-17 04:22:46.620962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:00.915 [2024-11-17 04:22:46.620970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:00.915 [2024-11-17 04:22:46.620978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:00.915 [2024-11-17 04:22:46.620986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:00.915 [2024-11-17 04:22:46.620995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.915 [2024-11-17 04:22:46.621002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:00.915 [2024-11-17 04:22:46.621011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:00.915 [2024-11-17 04:22:46.621019] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.915 [2024-11-17 04:22:46.621027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:00.915 [2024-11-17 04:22:46.621036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:00.915 [2024-11-17 04:22:46.621046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.915 [2024-11-17 04:22:46.621054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:00.915 [2024-11-17 04:22:46.621069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:00.915 [2024-11-17 04:22:46.621078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.915 [2024-11-17 04:22:46.621086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:00.915 [2024-11-17 04:22:46.621094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:00.915 [2024-11-17 04:22:46.621101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.915 [2024-11-17 04:22:46.621110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:00.915 [2024-11-17 04:22:46.621117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:00.915 [2024-11-17 04:22:46.621125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.915 [2024-11-17 04:22:46.621132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:00.915 [2024-11-17 04:22:46.621140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:00.915 [2024-11-17 04:22:46.621147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:00.915 [2024-11-17 04:22:46.621155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:00.915 [2024-11-17 04:22:46.621162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:00.915 [2024-11-17 04:22:46.621171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:00.915 [2024-11-17 04:22:46.621179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:00.915 [2024-11-17 04:22:46.621186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:00.915 [2024-11-17 04:22:46.621197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.915 [2024-11-17 04:22:46.621205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:00.915 [2024-11-17 04:22:46.621212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:00.915 [2024-11-17 04:22:46.621220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.915 [2024-11-17 04:22:46.621226] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:00.915 [2024-11-17 04:22:46.621235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:00.915 [2024-11-17 04:22:46.621243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:00.915 [2024-11-17 04:22:46.621250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.915 [2024-11-17 04:22:46.621259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:00.915 [2024-11-17 04:22:46.621267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:00.915 [2024-11-17 04:22:46.621274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:00.915 
[2024-11-17 04:22:46.621281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:00.915 [2024-11-17 04:22:46.621288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:00.915 [2024-11-17 04:22:46.621294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:00.915 [2024-11-17 04:22:46.621306] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:00.915 [2024-11-17 04:22:46.621320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:00.915 [2024-11-17 04:22:46.621331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:00.915 [2024-11-17 04:22:46.621339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:00.915 [2024-11-17 04:22:46.621347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:00.915 [2024-11-17 04:22:46.621355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:00.915 [2024-11-17 04:22:46.621362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:00.915 [2024-11-17 04:22:46.621370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:00.915 [2024-11-17 04:22:46.621393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:00.915 [2024-11-17 04:22:46.621401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:00.915 [2024-11-17 04:22:46.621408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:00.915 [2024-11-17 04:22:46.621416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:00.915 [2024-11-17 04:22:46.621424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:00.915 [2024-11-17 04:22:46.621432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:00.915 [2024-11-17 04:22:46.621440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:00.915 [2024-11-17 04:22:46.621447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:00.915 [2024-11-17 04:22:46.621455] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:00.915 [2024-11-17 04:22:46.621464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:00.915 [2024-11-17 04:22:46.621478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:00.915 [2024-11-17 04:22:46.621487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:00.915 [2024-11-17 04:22:46.621495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:00.916 [2024-11-17 04:22:46.621504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:00.916 [2024-11-17 04:22:46.621512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.916 [2024-11-17 04:22:46.621521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:00.916 [2024-11-17 04:22:46.621530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:17:00.916 [2024-11-17 04:22:46.621538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.916 [2024-11-17 04:22:46.635963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.916 [2024-11-17 04:22:46.636182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.916 [2024-11-17 04:22:46.636213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.373 ms 00:17:00.916 [2024-11-17 04:22:46.636227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.916 [2024-11-17 04:22:46.636401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.916 [2024-11-17 04:22:46.636418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:00.916 [2024-11-17 04:22:46.636428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:00.916 [2024-11-17 04:22:46.636437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.177 [2024-11-17 04:22:46.656510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.177 [2024-11-17 04:22:46.656573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.177 [2024-11-17 04:22:46.656586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.046 ms 00:17:01.177 [2024-11-17 04:22:46.656595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.177 [2024-11-17 04:22:46.656695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.177 [2024-11-17 04:22:46.656711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.177 [2024-11-17 04:22:46.656720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:01.177 [2024-11-17 04:22:46.656733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.177 [2024-11-17 04:22:46.657292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.177 [2024-11-17 04:22:46.657331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.177 [2024-11-17 04:22:46.657347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:17:01.177 [2024-11-17 04:22:46.657360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.177 [2024-11-17 04:22:46.657601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.177 [2024-11-17 04:22:46.657624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.177 [2024-11-17 04:22:46.657641] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:17:01.177 [2024-11-17 04:22:46.657652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.177 [2024-11-17 04:22:46.666861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.177 [2024-11-17 04:22:46.667081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.177 [2024-11-17 04:22:46.667112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.177 ms 00:17:01.177 [2024-11-17 04:22:46.667124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.177 [2024-11-17 04:22:46.671223] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:01.177 [2024-11-17 04:22:46.671284] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:01.177 [2024-11-17 04:22:46.671298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.177 [2024-11-17 04:22:46.671306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:01.177 [2024-11-17 04:22:46.671315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.017 ms 00:17:01.177 [2024-11-17 04:22:46.671322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.177 [2024-11-17 04:22:46.687267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.177 [2024-11-17 04:22:46.687326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:01.177 [2024-11-17 04:22:46.687339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.845 ms 00:17:01.177 [2024-11-17 04:22:46.687347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.177 [2024-11-17 04:22:46.690261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.177 [2024-11-17 04:22:46.690461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:01.177 [2024-11-17 04:22:46.690480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:17:01.177 [2024-11-17 04:22:46.690488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.177 [2024-11-17 04:22:46.693178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.177 [2024-11-17 04:22:46.693224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:01.177 [2024-11-17 04:22:46.693234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:17:01.178 [2024-11-17 04:22:46.693241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.178 [2024-11-17 04:22:46.693615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.178 [2024-11-17 04:22:46.693629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:01.178 [2024-11-17 04:22:46.693642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:01.178 [2024-11-17 04:22:46.693650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.178 [2024-11-17 04:22:46.718657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.178 [2024-11-17 04:22:46.718720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:01.178 [2024-11-17 04:22:46.718733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.982 ms 00:17:01.178 [2024-11-17 04:22:46.718742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.178 [2024-11-17 04:22:46.727138] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:01.178 [2024-11-17 04:22:46.746598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.178 [2024-11-17 04:22:46.746650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:01.178 [2024-11-17 04:22:46.746663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.759 ms 00:17:01.178 [2024-11-17 04:22:46.746672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.178 [2024-11-17 04:22:46.746763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.178 [2024-11-17 04:22:46.746780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:01.178 [2024-11-17 04:22:46.746791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:01.178 [2024-11-17 04:22:46.746800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.178 [2024-11-17 04:22:46.746863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.178 [2024-11-17 04:22:46.746873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:01.178 [2024-11-17 04:22:46.746881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:01.178 [2024-11-17 04:22:46.746894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.178 [2024-11-17 04:22:46.746918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.178 [2024-11-17 04:22:46.746928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:01.178 [2024-11-17 04:22:46.746936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:01.178 [2024-11-17 04:22:46.746945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.178 [2024-11-17 04:22:46.746983] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:01.178 [2024-11-17 04:22:46.746995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.178 [2024-11-17 04:22:46.747004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:01.178 [2024-11-17 04:22:46.747019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:01.178 [2024-11-17 04:22:46.747028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.178 [2024-11-17 04:22:46.753203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.178 [2024-11-17 04:22:46.753443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:01.178 [2024-11-17 04:22:46.753464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:17:01.178 [2024-11-17 04:22:46.753473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.178 [2024-11-17 04:22:46.753570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.178 [2024-11-17 04:22:46.753584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:01.178 [2024-11-17 04:22:46.753600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:01.178 [2024-11-17 04:22:46.753609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.178 
[2024-11-17 04:22:46.754664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:01.178 [2024-11-17 04:22:46.756024] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.067 ms, result 0 00:17:01.178 [2024-11-17 04:22:46.757583] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:01.178 [2024-11-17 04:22:46.764717] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.123  [2024-11-17T04:22:48.793Z] Copying: 13/256 [MB] (13 MBps) [2024-11-17T04:22:50.182Z] Copying: 31/256 [MB] (17 MBps) [2024-11-17T04:22:51.127Z] Copying: 47/256 [MB] (16 MBps) [2024-11-17T04:22:52.071Z] Copying: 66/256 [MB] (19 MBps) [2024-11-17T04:22:53.016Z] Copying: 90/256 [MB] (24 MBps) [2024-11-17T04:22:53.960Z] Copying: 108/256 [MB] (17 MBps) [2024-11-17T04:22:54.906Z] Copying: 123/256 [MB] (15 MBps) [2024-11-17T04:22:55.850Z] Copying: 134/256 [MB] (10 MBps) [2024-11-17T04:22:56.796Z] Copying: 145/256 [MB] (11 MBps) [2024-11-17T04:22:58.184Z] Copying: 157/256 [MB] (12 MBps) [2024-11-17T04:22:59.130Z] Copying: 170/256 [MB] (12 MBps) [2024-11-17T04:23:00.076Z] Copying: 182/256 [MB] (12 MBps) [2024-11-17T04:23:01.023Z] Copying: 192/256 [MB] (10 MBps) [2024-11-17T04:23:01.970Z] Copying: 202/256 [MB] (10 MBps) [2024-11-17T04:23:02.916Z] Copying: 213/256 [MB] (10 MBps) [2024-11-17T04:23:03.862Z] Copying: 223/256 [MB] (10 MBps) [2024-11-17T04:23:04.809Z] Copying: 233/256 [MB] (10 MBps) [2024-11-17T04:23:06.202Z] Copying: 243/256 [MB] (10 MBps) [2024-11-17T04:23:06.202Z] Copying: 253/256 [MB] (10 MBps) [2024-11-17T04:23:06.202Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-17 04:23:05.981649] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.475 [2024-11-17 04:23:05.983654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:05.983710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.475 [2024-11-17 04:23:05.983733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:20.475 [2024-11-17 04:23:05.983743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:05.983766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.475 [2024-11-17 04:23:05.984526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:05.984556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.475 [2024-11-17 04:23:05.984567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:17:20.475 [2024-11-17 04:23:05.984585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:05.987686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:05.987726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.475 [2024-11-17 04:23:05.987738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.070 ms 00:17:20.475 [2024-11-17 04:23:05.987747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:05.995515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:20.475 [2024-11-17 04:23:05.995559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.475 [2024-11-17 04:23:05.995570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.742 ms 00:17:20.475 [2024-11-17 04:23:05.995589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:06.002724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:06.002766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.475 [2024-11-17 04:23:06.002777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.049 ms 00:17:20.475 [2024-11-17 04:23:06.002785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:06.005918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:06.005985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.475 [2024-11-17 04:23:06.006003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:17:20.475 [2024-11-17 04:23:06.006011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:06.010925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:06.010975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.475 [2024-11-17 04:23:06.011003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.834 ms 00:17:20.475 [2024-11-17 04:23:06.011011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:06.011150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:06.011160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.475 [2024-11-17 04:23:06.011169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:20.475 [2024-11-17 04:23:06.011177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:06.014284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:06.014330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:20.475 [2024-11-17 04:23:06.014341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:17:20.475 [2024-11-17 04:23:06.014349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:06.017443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:06.017495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.475 [2024-11-17 04:23:06.017507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:17:20.475 [2024-11-17 04:23:06.017515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:06.019680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:06.019722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.475 [2024-11-17 04:23:06.019733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.110 ms 00:17:20.475 [2024-11-17 04:23:06.019740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:06.021918] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.475 [2024-11-17 04:23:06.021961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.475 [2024-11-17 04:23:06.021970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:17:20.475 [2024-11-17 04:23:06.021978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.475 [2024-11-17 04:23:06.022021] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.475 [2024-11-17 04:23:06.022037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.475 [2024-11-17 04:23:06.022135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 
04:23:06.022206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 
00:17:20.476 [2024-11-17 04:23:06.022411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 
wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.476 [2024-11-17 04:23:06.022777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.477 [2024-11-17 04:23:06.022783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.477 [2024-11-17 04:23:06.022791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.477 [2024-11-17 04:23:06.022798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.477 [2024-11-17 04:23:06.022805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.477 [2024-11-17 04:23:06.022824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.477 [2024-11-17 04:23:06.022839] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.477 [2024-11-17 04:23:06.022847] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c15d8b6-41d9-49c5-8e71-5e06c15819a7 00:17:20.477 [2024-11-17 04:23:06.022855] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.477 [2024-11-17 04:23:06.022863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.477 [2024-11-17 04:23:06.022874] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.477 [2024-11-17 04:23:06.022883] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.477 [2024-11-17 04:23:06.022890] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.477 [2024-11-17 04:23:06.022898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.477 [2024-11-17 04:23:06.022906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.477 [2024-11-17 04:23:06.022913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.477 [2024-11-17 04:23:06.022919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.477 [2024-11-17 04:23:06.022927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.477 [2024-11-17 04:23:06.022943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.477 [2024-11-17 04:23:06.022953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:17:20.477 [2024-11-17 04:23:06.022965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.025358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.477 [2024-11-17 04:23:06.025415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.477 [2024-11-17 04:23:06.025434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.375 ms 00:17:20.477 [2024-11-17 04:23:06.025447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.025579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.477 [2024-11-17 04:23:06.025588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.477 [2024-11-17 04:23:06.025598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:20.477 [2024-11-17 04:23:06.025605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.032165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.032192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.477 [2024-11-17 04:23:06.032202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.032209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.032297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.032307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.477 [2024-11-17 04:23:06.032316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.032323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.032359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.032367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.477 [2024-11-17 04:23:06.032399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.032407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.032422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.032435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.477 [2024-11-17 04:23:06.032442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.032449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.041231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.041264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.477 [2024-11-17 04:23:06.041274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.041281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.048363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.048493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.477 [2024-11-17 04:23:06.048504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.048511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.048552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.048561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.477 [2024-11-17 04:23:06.048569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.048576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.048604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.048612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.477 [2024-11-17 04:23:06.048622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.048630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.048692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.048701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.477 
[2024-11-17 04:23:06.048708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.048716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.048742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.048754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.477 [2024-11-17 04:23:06.048762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.048772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.048808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.048816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.477 [2024-11-17 04:23:06.048824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.048831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.048874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.477 [2024-11-17 04:23:06.048891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.477 [2024-11-17 04:23:06.048903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.477 [2024-11-17 04:23:06.048910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.477 [2024-11-17 04:23:06.049038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.383 ms, result 0 00:17:20.740 00:17:20.740 00:17:20.740 04:23:06 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85317 00:17:20.740 04:23:06 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:20.740 04:23:06 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85317 00:17:20.740 04:23:06 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85317 ']' 00:17:20.740 04:23:06 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.740 04:23:06 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.740 04:23:06 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.740 04:23:06 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.740 04:23:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:20.740 [2024-11-17 04:23:06.458947] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:17:20.740 [2024-11-17 04:23:06.459096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85317 ] 00:17:21.000 [2024-11-17 04:23:06.619798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.000 [2024-11-17 04:23:06.648965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.944 04:23:07 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.944 04:23:07 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:21.944 04:23:07 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:21.944 [2024-11-17 04:23:07.522299] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.944 [2024-11-17 04:23:07.522392] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.207 [2024-11-17 04:23:07.700236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.700312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:22.207 [2024-11-17 04:23:07.700335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:22.207 [2024-11-17 04:23:07.700346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.702890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.702945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.207 [2024-11-17 04:23:07.702958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.516 ms 00:17:22.207 [2024-11-17 04:23:07.702968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.703097] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:22.207 [2024-11-17 04:23:07.703393] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:22.207 [2024-11-17 04:23:07.703409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.703423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.207 [2024-11-17 04:23:07.703433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:17:22.207 [2024-11-17 04:23:07.703444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.705277] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:22.207 [2024-11-17 04:23:07.709156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.709203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:22.207 [2024-11-17 04:23:07.709217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:17:22.207 [2024-11-17 04:23:07.709225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.709308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.709318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:22.207 [2024-11-17 04:23:07.709332] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:22.207 [2024-11-17 04:23:07.709339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.717782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.717829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.207 [2024-11-17 04:23:07.717841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.366 ms 00:17:22.207 [2024-11-17 04:23:07.717849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.717974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.717984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.207 [2024-11-17 04:23:07.717997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:22.207 [2024-11-17 04:23:07.718007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.718035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.718043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:22.207 [2024-11-17 04:23:07.718056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:22.207 [2024-11-17 04:23:07.718064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.718089] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:22.207 [2024-11-17 04:23:07.720184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.720227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.207 [2024-11-17 04:23:07.720238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:17:22.207 [2024-11-17 04:23:07.720263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.720308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.720319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:22.207 [2024-11-17 04:23:07.720327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:22.207 [2024-11-17 04:23:07.720337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.720358] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:22.207 [2024-11-17 04:23:07.720404] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:22.207 [2024-11-17 04:23:07.720449] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:22.207 [2024-11-17 04:23:07.720474] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:22.207 [2024-11-17 04:23:07.720579] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:22.207 [2024-11-17 04:23:07.720592] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:22.207 [2024-11-17 04:23:07.720607] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:22.207 [2024-11-17 04:23:07.720621] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:22.207 [2024-11-17 04:23:07.720630] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:22.207 [2024-11-17 04:23:07.720642] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:22.207 [2024-11-17 04:23:07.720649] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:22.207 [2024-11-17 04:23:07.720662] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:22.207 [2024-11-17 04:23:07.720672] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:22.207 [2024-11-17 04:23:07.720682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.720689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:22.207 [2024-11-17 04:23:07.720699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:17:22.207 [2024-11-17 04:23:07.720706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.207 [2024-11-17 04:23:07.720797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.207 [2024-11-17 04:23:07.720805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:22.208 [2024-11-17 04:23:07.720816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:22.208 [2024-11-17 04:23:07.720823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.208 [2024-11-17 04:23:07.720928] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:22.208 [2024-11-17 04:23:07.720939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:22.208 [2024-11-17 04:23:07.720951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.208 [2024-11-17 04:23:07.720962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.208 [2024-11-17 04:23:07.720976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:22.208 [2024-11-17 04:23:07.720984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:22.208 [2024-11-17 04:23:07.720995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:22.208 [2024-11-17 04:23:07.721009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:22.208 [2024-11-17 04:23:07.721020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:22.208 [2024-11-17 04:23:07.721028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.208 [2024-11-17 04:23:07.721037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:22.208 [2024-11-17 04:23:07.721045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:22.208 [2024-11-17 04:23:07.721058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.208 [2024-11-17 04:23:07.721066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:22.208 [2024-11-17 04:23:07.721076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:22.208 [2024-11-17 04:23:07.721083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.208 
[2024-11-17 04:23:07.721093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:22.208 [2024-11-17 04:23:07.721101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:22.208 [2024-11-17 04:23:07.721110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.208 [2024-11-17 04:23:07.721118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:22.208 [2024-11-17 04:23:07.721129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:22.208 [2024-11-17 04:23:07.721138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.208 [2024-11-17 04:23:07.721147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:22.208 [2024-11-17 04:23:07.721155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:22.208 [2024-11-17 04:23:07.721164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.208 [2024-11-17 04:23:07.721172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:22.208 [2024-11-17 04:23:07.721182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:22.208 [2024-11-17 04:23:07.721189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.208 [2024-11-17 04:23:07.721199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:22.208 [2024-11-17 04:23:07.721206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:22.208 [2024-11-17 04:23:07.721215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.208 [2024-11-17 04:23:07.721222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:22.208 [2024-11-17 04:23:07.721230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:22.208 [2024-11-17 04:23:07.721237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.208 [2024-11-17 04:23:07.721245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:22.208 [2024-11-17 04:23:07.721251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:22.208 [2024-11-17 04:23:07.721261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.208 [2024-11-17 04:23:07.721268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:22.208 [2024-11-17 04:23:07.721281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:22.208 [2024-11-17 04:23:07.721287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.208 [2024-11-17 04:23:07.721295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:22.208 [2024-11-17 04:23:07.721302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:22.208 [2024-11-17 04:23:07.721310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.208 [2024-11-17 04:23:07.721317] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:22.208 [2024-11-17 04:23:07.721327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:22.208 [2024-11-17 04:23:07.721335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.208 [2024-11-17 04:23:07.721344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.208 [2024-11-17 04:23:07.721352] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:22.208 [2024-11-17 04:23:07.721361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:22.208 [2024-11-17 04:23:07.721367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:22.208 [2024-11-17 04:23:07.721391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:22.208 [2024-11-17 04:23:07.721397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:22.208 [2024-11-17 04:23:07.721408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:22.208 [2024-11-17 04:23:07.721417] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:22.208 [2024-11-17 04:23:07.721428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.208 [2024-11-17 04:23:07.721442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:22.208 [2024-11-17 04:23:07.721451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:22.208 [2024-11-17 04:23:07.721458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:22.208 [2024-11-17 04:23:07.721467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:22.208 [2024-11-17 04:23:07.721474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:22.208 [2024-11-17 04:23:07.721483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:22.208 [2024-11-17 04:23:07.721490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:22.208 [2024-11-17 04:23:07.721498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:22.208 [2024-11-17 04:23:07.721505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:22.208 [2024-11-17 04:23:07.721515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:22.208 [2024-11-17 04:23:07.721521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:22.208 [2024-11-17 04:23:07.721530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:22.208 [2024-11-17 04:23:07.721537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:22.208 [2024-11-17 04:23:07.721548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:22.208 [2024-11-17 04:23:07.721555] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:22.208 [2024-11-17 
04:23:07.721567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.208 [2024-11-17 04:23:07.721576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:22.208 [2024-11-17 04:23:07.721585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:22.208 [2024-11-17 04:23:07.721592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:22.208 [2024-11-17 04:23:07.721601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:22.208 [2024-11-17 04:23:07.721608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.208 [2024-11-17 04:23:07.721618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:22.208 [2024-11-17 04:23:07.721626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:17:22.208 [2024-11-17 04:23:07.721635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.208 [2024-11-17 04:23:07.735258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.208 [2024-11-17 04:23:07.735301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.208 [2024-11-17 04:23:07.735312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.563 ms 00:17:22.208 [2024-11-17 04:23:07.735321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.208 [2024-11-17 04:23:07.735484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.208 [2024-11-17 04:23:07.735502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:22.208 [2024-11-17 04:23:07.735511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:22.208 [2024-11-17 04:23:07.735522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.208 [2024-11-17 04:23:07.747349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.208 [2024-11-17 04:23:07.747421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.208 [2024-11-17 04:23:07.747437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.807 ms 00:17:22.208 [2024-11-17 04:23:07.747447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.208 [2024-11-17 04:23:07.747515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.208 [2024-11-17 04:23:07.747528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.208 [2024-11-17 04:23:07.747536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:22.208 [2024-11-17 04:23:07.747546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.208 [2024-11-17 04:23:07.748034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.748073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.209 [2024-11-17 04:23:07.748084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:17:22.209 [2024-11-17 04:23:07.748095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.748255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.748273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.209 [2024-11-17 04:23:07.748286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:17:22.209 [2024-11-17 04:23:07.748299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.755857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.755898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.209 [2024-11-17 04:23:07.755908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.530 ms 00:17:22.209 [2024-11-17 04:23:07.755918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.759536] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:22.209 [2024-11-17 04:23:07.759580] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:22.209 [2024-11-17 04:23:07.759592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.759602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:22.209 [2024-11-17 04:23:07.759611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.581 ms 00:17:22.209 [2024-11-17 04:23:07.759620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.775103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.775163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:22.209 [2024-11-17 04:23:07.775175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.428 ms 00:17:22.209 [2024-11-17 04:23:07.775187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.777917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.777965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:22.209 [2024-11-17 04:23:07.777974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:17:22.209 [2024-11-17 04:23:07.777984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.780720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.780765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:22.209 [2024-11-17 04:23:07.780774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.684 ms 00:17:22.209 [2024-11-17 04:23:07.780783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.781122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.781142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:22.209 [2024-11-17 04:23:07.781153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:22.209 [2024-11-17 04:23:07.781163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.813547] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.813614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:22.209 [2024-11-17 04:23:07.813628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.361 ms 00:17:22.209 [2024-11-17 04:23:07.813642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.821877] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:22.209 [2024-11-17 04:23:07.840203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.840257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:22.209 [2024-11-17 04:23:07.840273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.455 ms 00:17:22.209 [2024-11-17 04:23:07.840282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.840369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.840407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:22.209 [2024-11-17 04:23:07.840428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:22.209 [2024-11-17 04:23:07.840437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.840497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.840509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:22.209 [2024-11-17 04:23:07.840520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:22.209 [2024-11-17 04:23:07.840531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.840560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.840568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:22.209 [2024-11-17 04:23:07.840583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:22.209 [2024-11-17 04:23:07.840593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.840633] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:22.209 [2024-11-17 04:23:07.840646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.840656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:22.209 [2024-11-17 04:23:07.840664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:22.209 [2024-11-17 04:23:07.840674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.846356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.846413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:22.209 [2024-11-17 04:23:07.846425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.658 ms 00:17:22.209 [2024-11-17 04:23:07.846439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.846525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.209 [2024-11-17 04:23:07.846538] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:22.209 [2024-11-17 04:23:07.846552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:22.209 [2024-11-17 04:23:07.846562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.209 [2024-11-17 04:23:07.847681] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.209 [2024-11-17 04:23:07.848985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 147.121 ms, result 0 00:17:22.209 [2024-11-17 04:23:07.850569] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.209 Some configs were skipped because the RPC state that can call them passed over. 00:17:22.209 04:23:07 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:22.470 [2024-11-17 04:23:08.084583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.470 [2024-11-17 04:23:08.084637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:22.470 [2024-11-17 04:23:08.084653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:17:22.470 [2024-11-17 04:23:08.084662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.470 [2024-11-17 04:23:08.084700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.145 ms, result 0 00:17:22.470 true 00:17:22.470 04:23:08 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:22.732 [2024-11-17 04:23:08.292568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.732 [2024-11-17 04:23:08.292626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:22.732 [2024-11-17 04:23:08.292638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.785 ms 00:17:22.732 [2024-11-17 04:23:08.292648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.732 [2024-11-17 04:23:08.292685] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.905 ms, result 0 00:17:22.732 true 00:17:22.732 04:23:08 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85317 00:17:22.732 04:23:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85317 ']' 00:17:22.732 04:23:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85317 00:17:22.732 04:23:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:22.732 04:23:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.732 04:23:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85317 00:17:22.732 04:23:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.732 killing process with pid 85317 00:17:22.732 04:23:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.732 04:23:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85317' 00:17:22.732 04:23:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85317 00:17:22.732 04:23:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85317 00:17:22.995 [2024-11-17 04:23:08.461775] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.461827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:22.995 [2024-11-17 04:23:08.461841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:22.995 [2024-11-17 04:23:08.461850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.461879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:22.995 [2024-11-17 04:23:08.462359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.462394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:22.995 [2024-11-17 04:23:08.462405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:17:22.995 [2024-11-17 04:23:08.462414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.462691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.462704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:22.995 [2024-11-17 04:23:08.462713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:17:22.995 [2024-11-17 04:23:08.462723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.467400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.467433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:22.995 [2024-11-17 04:23:08.467443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.658 ms 00:17:22.995 [2024-11-17 04:23:08.467452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.474385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.474416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:22.995 [2024-11-17 04:23:08.474426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms 00:17:22.995 [2024-11-17 04:23:08.474436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.476826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.476861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:22.995 [2024-11-17 04:23:08.476871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:17:22.995 [2024-11-17 04:23:08.476879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.480577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.480613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:22.995 [2024-11-17 04:23:08.480622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.661 ms 00:17:22.995 [2024-11-17 04:23:08.480634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.480759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.480771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:22.995 [2024-11-17 04:23:08.480780] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:22.995 [2024-11-17 04:23:08.480789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.483572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.483606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:22.995 [2024-11-17 04:23:08.483616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.766 ms 00:17:22.995 [2024-11-17 04:23:08.483629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.485832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.485865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:22.995 [2024-11-17 04:23:08.485874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.165 ms 00:17:22.995 [2024-11-17 04:23:08.485883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.487793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.487829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:22.995 [2024-11-17 04:23:08.487838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.875 ms 00:17:22.995 [2024-11-17 04:23:08.487846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.489752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.995 [2024-11-17 04:23:08.489787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:22.995 [2024-11-17 04:23:08.489795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.844 ms 00:17:22.995 [2024-11-17 04:23:08.489803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.995 [2024-11-17 04:23:08.489836] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:22.995 [2024-11-17 04:23:08.489852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489939] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:22.995 [2024-11-17 04:23:08.489973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.489980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.489989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.489996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 
04:23:08.490150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:22.996 [2024-11-17 04:23:08.490354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:22.996 [2024-11-17 04:23:08.490721] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:22.997 [2024-11-17 04:23:08.490729] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c15d8b6-41d9-49c5-8e71-5e06c15819a7 00:17:22.997 [2024-11-17 04:23:08.490739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:22.997 [2024-11-17 04:23:08.490748] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:22.997 [2024-11-17 04:23:08.490756] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:22.997 [2024-11-17 04:23:08.490764] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:22.997 [2024-11-17 04:23:08.490773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:22.997 [2024-11-17 04:23:08.490783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:22.997 [2024-11-17 04:23:08.490792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:22.997 [2024-11-17 04:23:08.490798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:22.997 [2024-11-17 04:23:08.490806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:22.997 [2024-11-17 04:23:08.490813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.997 
[2024-11-17 04:23:08.490823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:22.997 [2024-11-17 04:23:08.490831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:17:22.997 [2024-11-17 04:23:08.490843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.492480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.997 [2024-11-17 04:23:08.492501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:22.997 [2024-11-17 04:23:08.492510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.607 ms 00:17:22.997 [2024-11-17 04:23:08.492520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.492607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.997 [2024-11-17 04:23:08.492617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:22.997 [2024-11-17 04:23:08.492625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:22.997 [2024-11-17 04:23:08.492634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.498522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.498558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.997 [2024-11-17 04:23:08.498568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.498577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.498656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.498667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.997 [2024-11-17 04:23:08.498675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.498686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.498726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.498737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.997 [2024-11-17 04:23:08.498745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.498753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.498774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.498783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.997 [2024-11-17 04:23:08.498791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.498800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.509164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.509203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.997 [2024-11-17 04:23:08.509213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.509223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.516883] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.516923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.997 [2024-11-17 04:23:08.516933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.516944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.516985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.516999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.997 [2024-11-17 04:23:08.517007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.517020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.517050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.517059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.997 [2024-11-17 04:23:08.517067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.517076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.517137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.517151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.997 [2024-11-17 04:23:08.517161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.517170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.517202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.517212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:22.997 [2024-11-17 04:23:08.517220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.517230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.517269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.517280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.997 [2024-11-17 04:23:08.517289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.517298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.517342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.997 [2024-11-17 04:23:08.517353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.997 [2024-11-17 04:23:08.517361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.997 [2024-11-17 04:23:08.517370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.997 [2024-11-17 04:23:08.517513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.719 ms, result 0 00:17:22.997 04:23:08 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:22.997 04:23:08 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:23.258 [2024-11-17 04:23:08.766183] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:23.259 [2024-11-17 04:23:08.766315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85353 ] 00:17:23.259 [2024-11-17 04:23:08.930229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.259 [2024-11-17 04:23:08.958646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.520 [2024-11-17 04:23:09.068090] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.521 [2024-11-17 04:23:09.068167] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.521 [2024-11-17 04:23:09.230370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.521 [2024-11-17 04:23:09.230445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:23.521 [2024-11-17 04:23:09.230460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:23.521 [2024-11-17 04:23:09.230469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.521 [2024-11-17 04:23:09.233075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.521 [2024-11-17 04:23:09.233124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.521 [2024-11-17 04:23:09.233136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:17:23.521 [2024-11-17 04:23:09.233144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.521 [2024-11-17 04:23:09.233259] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:23.521 [2024-11-17 04:23:09.233545] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:23.521 [2024-11-17 04:23:09.233561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.521 [2024-11-17 04:23:09.233574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.521 [2024-11-17 04:23:09.233584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:17:23.521 [2024-11-17 04:23:09.233594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.521 [2024-11-17 04:23:09.235419] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:23.521 [2024-11-17 04:23:09.239322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.521 [2024-11-17 04:23:09.239371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:23.521 [2024-11-17 04:23:09.239410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.905 ms 00:17:23.521 [2024-11-17 04:23:09.239419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.521 [2024-11-17 04:23:09.239522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.521 [2024-11-17 04:23:09.239535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:23.521 [2024-11-17 04:23:09.239545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:17:23.521 [2024-11-17 04:23:09.239553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.784 [2024-11-17 04:23:09.248161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.784 [2024-11-17 04:23:09.248200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.784 [2024-11-17 04:23:09.248219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.561 ms 00:17:23.784 [2024-11-17 04:23:09.248227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.785 [2024-11-17 04:23:09.248427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.785 [2024-11-17 04:23:09.248444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.785 [2024-11-17 04:23:09.248458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:17:23.785 [2024-11-17 04:23:09.248467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.785 [2024-11-17 04:23:09.248500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.785 [2024-11-17 04:23:09.248509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:23.785 [2024-11-17 04:23:09.248518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:23.785 [2024-11-17 04:23:09.248525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.785 [2024-11-17 04:23:09.248547] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:23.785 [2024-11-17 04:23:09.250666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.785 [2024-11-17 04:23:09.250698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.785 [2024-11-17 04:23:09.250708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:17:23.785 [2024-11-17 04:23:09.250716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.785 [2024-11-17 04:23:09.250768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.785 [2024-11-17 04:23:09.250777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:23.785 [2024-11-17 04:23:09.250791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:23.785 [2024-11-17 04:23:09.250799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.785 [2024-11-17 04:23:09.250819] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:23.785 [2024-11-17 04:23:09.250839] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:23.785 [2024-11-17 04:23:09.250877] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:23.785 [2024-11-17 04:23:09.250897] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:23.785 [2024-11-17 04:23:09.251004] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:23.785 [2024-11-17 04:23:09.251016] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:23.785 [2024-11-17 04:23:09.251026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:23.785 [2024-11-17 04:23:09.251037] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251047] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251055] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:23.785 [2024-11-17 04:23:09.251063] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:23.785 [2024-11-17 04:23:09.251071] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:23.785 [2024-11-17 04:23:09.251081] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:23.785 [2024-11-17 04:23:09.251095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.785 [2024-11-17 04:23:09.251103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:23.785 [2024-11-17 04:23:09.251111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:23.785 [2024-11-17 04:23:09.251119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.785 [2024-11-17 04:23:09.251206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.785 [2024-11-17 04:23:09.251219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:23.785 [2024-11-17 04:23:09.251227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:23.785 [2024-11-17 04:23:09.251235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.785 [2024-11-17 04:23:09.251335] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:23.785 [2024-11-17 04:23:09.251346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:23.785 [2024-11-17 04:23:09.251359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:23.785 [2024-11-17 04:23:09.251409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:23.785 [2024-11-17 04:23:09.251438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.785 [2024-11-17 04:23:09.251454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:23.785 [2024-11-17 04:23:09.251463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:23.785 [2024-11-17 04:23:09.251470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.785 [2024-11-17 04:23:09.251478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:23.785 [2024-11-17 04:23:09.251488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:23.785 [2024-11-17 04:23:09.251497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251505] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:23.785 [2024-11-17 04:23:09.251514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:23.785 [2024-11-17 04:23:09.251539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:23.785 [2024-11-17 04:23:09.251565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:23.785 [2024-11-17 04:23:09.251590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:23.785 [2024-11-17 04:23:09.251613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:23.785 [2024-11-17 04:23:09.251636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.785 [2024-11-17 04:23:09.251652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:23.785 [2024-11-17 04:23:09.251660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:23.785 [2024-11-17 04:23:09.251667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.785 [2024-11-17 04:23:09.251675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:23.785 [2024-11-17 04:23:09.251682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:23.785 [2024-11-17 04:23:09.251694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:23.785 [2024-11-17 04:23:09.251708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:23.785 [2024-11-17 04:23:09.251715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251721] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:23.785 [2024-11-17 04:23:09.251730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:23.785 [2024-11-17 04:23:09.251738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.785 [2024-11-17 04:23:09.251761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:23.785 
[2024-11-17 04:23:09.251768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:23.785 [2024-11-17 04:23:09.251775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:23.785 [2024-11-17 04:23:09.251782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:23.785 [2024-11-17 04:23:09.251789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:23.785 [2024-11-17 04:23:09.251796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:23.785 [2024-11-17 04:23:09.251805] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:23.785 [2024-11-17 04:23:09.251814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.785 [2024-11-17 04:23:09.251825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:23.785 [2024-11-17 04:23:09.251832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:23.785 [2024-11-17 04:23:09.251839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:23.785 [2024-11-17 04:23:09.251846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:23.785 [2024-11-17 04:23:09.251853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:23.785 [2024-11-17 04:23:09.251861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:23.785 [2024-11-17 04:23:09.251868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:23.786 [2024-11-17 04:23:09.251875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:23.786 [2024-11-17 04:23:09.251881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:23.786 [2024-11-17 04:23:09.251888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:23.786 [2024-11-17 04:23:09.251896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:23.786 [2024-11-17 04:23:09.251903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:23.786 [2024-11-17 04:23:09.251910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:23.786 [2024-11-17 04:23:09.251917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:23.786 [2024-11-17 04:23:09.251924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:23.786 [2024-11-17 04:23:09.251933] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.786 [2024-11-17 04:23:09.251948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:23.786 [2024-11-17 04:23:09.251957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:23.786 [2024-11-17 04:23:09.251964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:23.786 [2024-11-17 04:23:09.251972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:23.786 [2024-11-17 04:23:09.251979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.251987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:23.786 [2024-11-17 04:23:09.251996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:17:23.786 [2024-11-17 04:23:09.252004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.266591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.266633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.786 [2024-11-17 04:23:09.266648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.528 ms 00:17:23.786 [2024-11-17 04:23:09.266657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.266790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.266808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:23.786 [2024-11-17 04:23:09.266817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:23.786 [2024-11-17 04:23:09.266825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.286584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.286640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.786 [2024-11-17 04:23:09.286657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.733 ms 00:17:23.786 [2024-11-17 04:23:09.286669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.286789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.286810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.786 [2024-11-17 04:23:09.286822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.786 [2024-11-17 04:23:09.286833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.287406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.287444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.786 [2024-11-17 04:23:09.287460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:17:23.786 [2024-11-17 04:23:09.287471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 
04:23:09.287682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.287696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.786 [2024-11-17 04:23:09.287712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:17:23.786 [2024-11-17 04:23:09.287723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.296531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.296574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.786 [2024-11-17 04:23:09.296595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.776 ms 00:17:23.786 [2024-11-17 04:23:09.296612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.300599] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:23.786 [2024-11-17 04:23:09.300644] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:23.786 [2024-11-17 04:23:09.300658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.300667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:23.786 [2024-11-17 04:23:09.300677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms 00:17:23.786 [2024-11-17 04:23:09.300685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.316210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.316266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:23.786 [2024-11-17 04:23:09.316279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.445 ms 00:17:23.786 [2024-11-17 04:23:09.316288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.319298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.319343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:23.786 [2024-11-17 04:23:09.319353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:17:23.786 [2024-11-17 04:23:09.319361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.322020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.322063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:23.786 [2024-11-17 04:23:09.322073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:17:23.786 [2024-11-17 04:23:09.322080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.322499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.322530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:23.786 [2024-11-17 04:23:09.322540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:17:23.786 [2024-11-17 04:23:09.322548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.347551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.347609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:23.786 [2024-11-17 04:23:09.347631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.977 ms 00:17:23.786 [2024-11-17 04:23:09.347644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.355954] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:23.786 [2024-11-17 04:23:09.374659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.374702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:23.786 [2024-11-17 04:23:09.374715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.918 ms 00:17:23.786 [2024-11-17 04:23:09.374723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.374815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.374828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:23.786 [2024-11-17 04:23:09.374838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:23.786 [2024-11-17 04:23:09.374850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.374906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.374916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.786 [2024-11-17 04:23:09.374925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:23.786 [2024-11-17 04:23:09.374933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.374956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.374964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.786 [2024-11-17 04:23:09.374978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:23.786 [2024-11-17 04:23:09.374986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.375027] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:23.786 [2024-11-17 04:23:09.375038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.375047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:23.786 [2024-11-17 04:23:09.375056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:23.786 [2024-11-17 04:23:09.375064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.381065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.381110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.786 [2024-11-17 04:23:09.381122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.980 ms 00:17:23.786 [2024-11-17 04:23:09.381130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.381232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.786 [2024-11-17 04:23:09.381244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:23.786 [2024-11-17 04:23:09.381253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:23.786 [2024-11-17 04:23:09.381261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.786 [2024-11-17 04:23:09.382311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.786 [2024-11-17 04:23:09.383670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.644 ms, result 0 00:17:23.786 [2024-11-17 04:23:09.385082] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.787 [2024-11-17 04:23:09.392310] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:24.778  [2024-11-17T04:23:11.484Z] Copying: 14/256 [MB] (14 MBps) [2024-11-17T04:23:12.428Z] Copying: 34/256 [MB] (20 MBps) [2024-11-17T04:23:13.812Z] Copying: 49/256 [MB] (14 MBps) [2024-11-17T04:23:14.755Z] Copying: 64/256 [MB] (15 MBps) [2024-11-17T04:23:15.700Z] Copying: 80/256 [MB] (15 MBps) [2024-11-17T04:23:16.644Z] Copying: 92/256 [MB] (11 MBps) [2024-11-17T04:23:17.589Z] Copying: 106/256 [MB] (13 MBps) [2024-11-17T04:23:18.534Z] Copying: 122/256 [MB] (16 MBps) [2024-11-17T04:23:19.477Z] Copying: 141/256 [MB] (18 MBps) [2024-11-17T04:23:20.421Z] Copying: 157/256 [MB] (15 MBps) [2024-11-17T04:23:21.809Z] Copying: 169/256 [MB] (11 MBps) [2024-11-17T04:23:22.754Z] Copying: 189/256 [MB] (20 MBps) [2024-11-17T04:23:23.697Z] Copying: 209/256 [MB] (20 MBps) [2024-11-17T04:23:24.640Z] Copying: 221/256 [MB] (11 MBps) [2024-11-17T04:23:25.585Z] Copying: 237/256 [MB] (16 MBps) [2024-11-17T04:23:25.585Z] Copying: 255/256 [MB] (17 MBps) [2024-11-17T04:23:25.585Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-17 04:23:25.452436] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:39.858 [2024-11-17 04:23:25.454840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.858 [2024-11-17 04:23:25.454905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:39.858 [2024-11-17 04:23:25.454927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:39.858 [2024-11-17 04:23:25.454940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.858 [2024-11-17 04:23:25.454964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:39.858 [2024-11-17 04:23:25.455925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.858 [2024-11-17 04:23:25.455967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:39.858 [2024-11-17 04:23:25.455981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:17:39.858 [2024-11-17 04:23:25.455991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.858 [2024-11-17 04:23:25.456285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.858 [2024-11-17 04:23:25.456299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:39.858 [2024-11-17 04:23:25.456309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:17:39.858 [2024-11-17 04:23:25.456322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.858 [2024-11-17 
04:23:25.460037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.858 [2024-11-17 04:23:25.460065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:39.858 [2024-11-17 04:23:25.460076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.699 ms 00:17:39.858 [2024-11-17 04:23:25.460090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.858 [2024-11-17 04:23:25.466979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.858 [2024-11-17 04:23:25.467023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:39.858 [2024-11-17 04:23:25.467036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.857 ms 00:17:39.858 [2024-11-17 04:23:25.467058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.858 [2024-11-17 04:23:25.470251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.858 [2024-11-17 04:23:25.470302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:39.858 [2024-11-17 04:23:25.470312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:17:39.858 [2024-11-17 04:23:25.470321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.858 [2024-11-17 04:23:25.476838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.858 [2024-11-17 04:23:25.476903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:39.858 [2024-11-17 04:23:25.476914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.443 ms 00:17:39.858 [2024-11-17 04:23:25.476927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.858 [2024-11-17 04:23:25.477068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.858 [2024-11-17 04:23:25.477081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:39.858 [2024-11-17 04:23:25.477090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:39.858 [2024-11-17 04:23:25.477105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.858 [2024-11-17 04:23:25.480634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.858 [2024-11-17 04:23:25.480681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:39.858 [2024-11-17 04:23:25.480691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.509 ms 00:17:39.858 [2024-11-17 04:23:25.480699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.858 [2024-11-17 04:23:25.483842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.859 [2024-11-17 04:23:25.483887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:39.859 [2024-11-17 04:23:25.483898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:17:39.859 [2024-11-17 04:23:25.483905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.859 [2024-11-17 04:23:25.486222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.859 [2024-11-17 04:23:25.486268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:39.859 [2024-11-17 04:23:25.486278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.274 ms 00:17:39.859 [2024-11-17 04:23:25.486286] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:39.859 [2024-11-17 04:23:25.488491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.859 [2024-11-17 04:23:25.488537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:39.859 [2024-11-17 04:23:25.488547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:17:39.859 [2024-11-17 04:23:25.488554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.859 [2024-11-17 04:23:25.488596] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:39.859 [2024-11-17 04:23:25.488613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488968] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.488992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489173] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:39.859 [2024-11-17 04:23:25.489238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 
04:23:25.489393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:39.860 [2024-11-17 04:23:25.489460] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:39.860 [2024-11-17 04:23:25.489469] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c15d8b6-41d9-49c5-8e71-5e06c15819a7 00:17:39.860 [2024-11-17 04:23:25.489478] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:39.860 [2024-11-17 04:23:25.489486] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:39.860 [2024-11-17 04:23:25.489496] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:39.860 [2024-11-17 04:23:25.489506] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:39.860 [2024-11-17 04:23:25.489515] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:39.860 [2024-11-17 04:23:25.489524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:39.860 [2024-11-17 04:23:25.489533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:39.860 [2024-11-17 04:23:25.489540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:39.860 [2024-11-17 04:23:25.489548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:39.860 [2024-11-17 04:23:25.489556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.860 [2024-11-17 04:23:25.489569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:39.860 [2024-11-17 04:23:25.489580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:17:39.860 [2024-11-17 04:23:25.489588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.492735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.860 [2024-11-17 04:23:25.492773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:39.860 [2024-11-17 04:23:25.492788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:17:39.860 [2024-11-17 04:23:25.492798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.492972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.860 [2024-11-17 04:23:25.492984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:39.860 [2024-11-17 04:23:25.492994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:39.860 [2024-11-17 04:23:25.493001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.503881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.503934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.860 
[2024-11-17 04:23:25.503947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.503955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.504054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.504064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.860 [2024-11-17 04:23:25.504073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.504081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.504130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.504142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.860 [2024-11-17 04:23:25.504151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.504160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.504184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.504193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.860 [2024-11-17 04:23:25.504202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.504210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.523634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.523697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.860 [2024-11-17 04:23:25.523709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.523720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.539388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.539448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.860 [2024-11-17 04:23:25.539461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.539470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.539538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.539549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.860 [2024-11-17 04:23:25.539559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.539569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.539606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.539627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.860 [2024-11-17 04:23:25.539636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.539648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.539735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.539747] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.860 [2024-11-17 04:23:25.539757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.539766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.539810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.539822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.860 [2024-11-17 04:23:25.539834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.539844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.539896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.539908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.860 [2024-11-17 04:23:25.539919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.539934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.539992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.860 [2024-11-17 04:23:25.540006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.860 [2024-11-17 04:23:25.540019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.860 [2024-11-17 04:23:25.540030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.860 [2024-11-17 04:23:25.540218] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.343 ms, result 0 00:17:40.122 00:17:40.122 00:17:40.122 04:23:25 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:40.122 04:23:25 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:40.694 04:23:26 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:40.695 [2024-11-17 04:23:26.385431] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
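The ftl.ftl_trim commands echoed just above (trim.sh steps @86, @87 and @90) check the read-back data and then push the random pattern back through the FTL bdev; the spdk_dd startup trace that follows is the output of that @90 command. A minimal shell sketch of the same sequence, assuming the repository paths printed in this log and assuming the cmp against /dev/zero is there to confirm the trimmed range reads back as zeroes, would look like:

    # Illustrative sketch only, mirroring the trim.sh commands echoed in the log;
    # the paths are the ones printed above and may differ on another checkout.
    SPDK=/home/vagrant/spdk_repo/spdk

    # @86/@87: compare the first 4 MiB of the read-back file against /dev/zero
    # (trimmed data is expected to read back as zeroes) and record a checksum.
    cmp --bytes=4194304 "$SPDK/test/ftl/data" /dev/zero
    md5sum "$SPDK/test/ftl/data"

    # @90: write 1024 blocks of the random pattern to the ftl0 bdev, reusing the
    # bdev configuration the test keeps in ftl.json.
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" --ob=ftl0 \
        --count=1024 --json="$SPDK/test/ftl/config/ftl.json"
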
00:17:40.695 [2024-11-17 04:23:26.385585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85543 ] 00:17:40.956 [2024-11-17 04:23:26.547166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.956 [2024-11-17 04:23:26.577354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.219 [2024-11-17 04:23:26.684991] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.220 [2024-11-17 04:23:26.685053] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.220 [2024-11-17 04:23:26.842004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.842056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.220 [2024-11-17 04:23:26.842073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:41.220 [2024-11-17 04:23:26.842081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.844461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.844503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.220 [2024-11-17 04:23:26.844516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.361 ms 00:17:41.220 [2024-11-17 04:23:26.844524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.844611] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.220 [2024-11-17 04:23:26.844927] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.220 [2024-11-17 04:23:26.844961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.844971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.220 [2024-11-17 04:23:26.844989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:17:41.220 [2024-11-17 04:23:26.844996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.846324] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:41.220 [2024-11-17 04:23:26.849513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.849556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:41.220 [2024-11-17 04:23:26.849572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms 00:17:41.220 [2024-11-17 04:23:26.849583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.849651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.849660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:41.220 [2024-11-17 04:23:26.849669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:41.220 [2024-11-17 04:23:26.849679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.855561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:41.220 [2024-11-17 04:23:26.855595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.220 [2024-11-17 04:23:26.855607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.845 ms 00:17:41.220 [2024-11-17 04:23:26.855621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.855738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.855749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.220 [2024-11-17 04:23:26.855758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:41.220 [2024-11-17 04:23:26.855766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.855794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.855806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.220 [2024-11-17 04:23:26.855819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:41.220 [2024-11-17 04:23:26.855826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.855846] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:41.220 [2024-11-17 04:23:26.857502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.857538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.220 [2024-11-17 04:23:26.857548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:17:41.220 [2024-11-17 04:23:26.857562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.857641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.857650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.220 [2024-11-17 04:23:26.857661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:41.220 [2024-11-17 04:23:26.857672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.857690] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:41.220 [2024-11-17 04:23:26.857709] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:41.220 [2024-11-17 04:23:26.857749] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:41.220 [2024-11-17 04:23:26.857767] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:41.220 [2024-11-17 04:23:26.857869] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:41.220 [2024-11-17 04:23:26.857880] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.220 [2024-11-17 04:23:26.857890] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:41.220 [2024-11-17 04:23:26.857900] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.220 [2024-11-17 04:23:26.857909] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.220 [2024-11-17 04:23:26.857925] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:41.220 [2024-11-17 04:23:26.857932] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.220 [2024-11-17 04:23:26.857943] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:41.220 [2024-11-17 04:23:26.857952] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:41.220 [2024-11-17 04:23:26.857962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.857970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.220 [2024-11-17 04:23:26.857977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:17:41.220 [2024-11-17 04:23:26.857984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.858071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.220 [2024-11-17 04:23:26.858080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.220 [2024-11-17 04:23:26.858087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:41.220 [2024-11-17 04:23:26.858093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.220 [2024-11-17 04:23:26.858194] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.220 [2024-11-17 04:23:26.858211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.220 [2024-11-17 04:23:26.858222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.220 [2024-11-17 04:23:26.858237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.220 [2024-11-17 04:23:26.858246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.220 [2024-11-17 04:23:26.858254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.220 [2024-11-17 04:23:26.858262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:41.220 [2024-11-17 04:23:26.858271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.220 [2024-11-17 04:23:26.858280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:41.220 [2024-11-17 04:23:26.858288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.220 [2024-11-17 04:23:26.858296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.220 [2024-11-17 04:23:26.858303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:41.220 [2024-11-17 04:23:26.858311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.220 [2024-11-17 04:23:26.858319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.220 [2024-11-17 04:23:26.858326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:41.220 [2024-11-17 04:23:26.858334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.220 [2024-11-17 04:23:26.858341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.220 [2024-11-17 04:23:26.858349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:41.220 [2024-11-17 04:23:26.858357] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.220 [2024-11-17 04:23:26.858366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.220 [2024-11-17 04:23:26.858387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:41.220 [2024-11-17 04:23:26.858395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.220 [2024-11-17 04:23:26.858403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.220 [2024-11-17 04:23:26.858417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:41.220 [2024-11-17 04:23:26.858424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.220 [2024-11-17 04:23:26.858432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.220 [2024-11-17 04:23:26.858440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:41.220 [2024-11-17 04:23:26.858448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.220 [2024-11-17 04:23:26.858455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.220 [2024-11-17 04:23:26.858464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:41.220 [2024-11-17 04:23:26.858472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.221 [2024-11-17 04:23:26.858479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.221 [2024-11-17 04:23:26.858487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:41.221 [2024-11-17 04:23:26.858495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.221 [2024-11-17 04:23:26.858502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.221 [2024-11-17 04:23:26.858510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:41.221 [2024-11-17 04:23:26.858518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.221 [2024-11-17 04:23:26.858526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:41.221 [2024-11-17 04:23:26.858534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:41.221 [2024-11-17 04:23:26.858543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.221 [2024-11-17 04:23:26.858550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:41.221 [2024-11-17 04:23:26.858558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:41.221 [2024-11-17 04:23:26.858565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.221 [2024-11-17 04:23:26.858573] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.221 [2024-11-17 04:23:26.858582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.221 [2024-11-17 04:23:26.858590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.221 [2024-11-17 04:23:26.858598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.221 [2024-11-17 04:23:26.858606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.221 [2024-11-17 04:23:26.858614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.221 [2024-11-17 04:23:26.858622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.221 
[2024-11-17 04:23:26.858632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.221 [2024-11-17 04:23:26.858640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.221 [2024-11-17 04:23:26.858648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.221 [2024-11-17 04:23:26.858659] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.221 [2024-11-17 04:23:26.858668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.221 [2024-11-17 04:23:26.858678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:41.221 [2024-11-17 04:23:26.858686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:41.221 [2024-11-17 04:23:26.858693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:41.221 [2024-11-17 04:23:26.858700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:41.221 [2024-11-17 04:23:26.858707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:41.221 [2024-11-17 04:23:26.858714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:41.221 [2024-11-17 04:23:26.858721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:41.221 [2024-11-17 04:23:26.858728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:41.221 [2024-11-17 04:23:26.858734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:41.221 [2024-11-17 04:23:26.858742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:41.221 [2024-11-17 04:23:26.858749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:41.221 [2024-11-17 04:23:26.858755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:41.221 [2024-11-17 04:23:26.858763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:41.221 [2024-11-17 04:23:26.858771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:41.221 [2024-11-17 04:23:26.858778] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.221 [2024-11-17 04:23:26.858786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.221 [2024-11-17 04:23:26.858798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.221 [2024-11-17 04:23:26.858806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.221 [2024-11-17 04:23:26.858813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.221 [2024-11-17 04:23:26.858820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.221 [2024-11-17 04:23:26.858828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-11-17 04:23:26.858835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.221 [2024-11-17 04:23:26.858842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:17:41.221 [2024-11-17 04:23:26.858852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-11-17 04:23:26.871287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-11-17 04:23:26.871331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.221 [2024-11-17 04:23:26.871344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.386 ms 00:17:41.221 [2024-11-17 04:23:26.871353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-11-17 04:23:26.871502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-11-17 04:23:26.871518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:41.221 [2024-11-17 04:23:26.871532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:41.221 [2024-11-17 04:23:26.871540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-11-17 04:23:26.896640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-11-17 04:23:26.896715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.221 [2024-11-17 04:23:26.896738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.074 ms 00:17:41.221 [2024-11-17 04:23:26.896754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-11-17 04:23:26.896902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-11-17 04:23:26.896928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.221 [2024-11-17 04:23:26.896944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.221 [2024-11-17 04:23:26.896959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-11-17 04:23:26.897541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-11-17 04:23:26.897597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.221 [2024-11-17 04:23:26.897618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:17:41.221 [2024-11-17 04:23:26.897644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-11-17 04:23:26.897890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-11-17 04:23:26.897917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.221 [2024-11-17 04:23:26.897937] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:17:41.221 [2024-11-17 04:23:26.897951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-11-17 04:23:26.906518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-11-17 04:23:26.906558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.221 [2024-11-17 04:23:26.906574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.522 ms 00:17:41.221 [2024-11-17 04:23:26.906582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-11-17 04:23:26.910362] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:41.221 [2024-11-17 04:23:26.910442] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:41.221 [2024-11-17 04:23:26.910454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-11-17 04:23:26.910463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:41.221 [2024-11-17 04:23:26.910472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.778 ms 00:17:41.221 [2024-11-17 04:23:26.910479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-11-17 04:23:26.926049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-11-17 04:23:26.926101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:41.221 [2024-11-17 04:23:26.926114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.507 ms 00:17:41.221 [2024-11-17 04:23:26.926123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-11-17 04:23:26.929030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.222 [2024-11-17 04:23:26.929080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:41.222 [2024-11-17 04:23:26.929091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.836 ms 00:17:41.222 [2024-11-17 04:23:26.929099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.222 [2024-11-17 04:23:26.931684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.222 [2024-11-17 04:23:26.931733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:41.222 [2024-11-17 04:23:26.931744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.517 ms 00:17:41.222 [2024-11-17 04:23:26.931752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.222 [2024-11-17 04:23:26.932100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.222 [2024-11-17 04:23:26.932123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:41.222 [2024-11-17 04:23:26.932134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:41.222 [2024-11-17 04:23:26.932142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.483 [2024-11-17 04:23:26.956608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.483 [2024-11-17 04:23:26.956674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:41.483 [2024-11-17 04:23:26.956688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.441 ms 00:17:41.483 [2024-11-17 04:23:26.956697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.483 [2024-11-17 04:23:26.964982] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:41.483 [2024-11-17 04:23:26.984979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.483 [2024-11-17 04:23:26.985041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.483 [2024-11-17 04:23:26.985055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.183 ms 00:17:41.483 [2024-11-17 04:23:26.985063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.483 [2024-11-17 04:23:26.985164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.483 [2024-11-17 04:23:26.985182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:41.483 [2024-11-17 04:23:26.985200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:41.483 [2024-11-17 04:23:26.985208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.483 [2024-11-17 04:23:26.985263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.483 [2024-11-17 04:23:26.985274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:41.483 [2024-11-17 04:23:26.985282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:41.483 [2024-11-17 04:23:26.985291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.483 [2024-11-17 04:23:26.985313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.483 [2024-11-17 04:23:26.985329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:41.483 [2024-11-17 04:23:26.985338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:41.483 [2024-11-17 04:23:26.985349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.483 [2024-11-17 04:23:26.985420] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:41.483 [2024-11-17 04:23:26.985433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.483 [2024-11-17 04:23:26.985445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:41.483 [2024-11-17 04:23:26.985455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:41.483 [2024-11-17 04:23:26.985462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.484 [2024-11-17 04:23:26.991543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.484 [2024-11-17 04:23:26.991594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.484 [2024-11-17 04:23:26.991607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.058 ms 00:17:41.484 [2024-11-17 04:23:26.991622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.484 [2024-11-17 04:23:26.991718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.484 [2024-11-17 04:23:26.991730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:41.484 [2024-11-17 04:23:26.991739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:41.484 [2024-11-17 04:23:26.991747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.484 
[2024-11-17 04:23:26.992800] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.484 [2024-11-17 04:23:26.994152] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 150.467 ms, result 0 00:17:41.484 [2024-11-17 04:23:26.995549] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.484 [2024-11-17 04:23:27.002813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.746  [2024-11-17T04:23:27.473Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-11-17 04:23:27.392971] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.746 [2024-11-17 04:23:27.393731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.393768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:41.746 [2024-11-17 04:23:27.393779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:41.746 [2024-11-17 04:23:27.393786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.393805] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:41.746 [2024-11-17 04:23:27.394262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.394290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:41.746 [2024-11-17 04:23:27.394299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:17:41.746 [2024-11-17 04:23:27.394307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.396627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.396662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:41.746 [2024-11-17 04:23:27.396677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:17:41.746 [2024-11-17 04:23:27.396685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.401094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.401122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:41.746 [2024-11-17 04:23:27.401131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.394 ms 00:17:41.746 [2024-11-17 04:23:27.401138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.408070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.408100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:41.746 [2024-11-17 04:23:27.408109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:17:41.746 [2024-11-17 04:23:27.408129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.410772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.410811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:41.746 [2024-11-17 04:23:27.410820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.592 ms 00:17:41.746 [2024-11-17 04:23:27.410827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.414857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.414896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:41.746 [2024-11-17 04:23:27.414905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.996 ms 00:17:41.746 [2024-11-17 04:23:27.414912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.415016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.415024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:41.746 [2024-11-17 04:23:27.415038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:41.746 [2024-11-17 04:23:27.415046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.417709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.417747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:41.746 [2024-11-17 04:23:27.417756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.647 ms 00:17:41.746 [2024-11-17 04:23:27.417765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.420488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.420532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:41.746 [2024-11-17 04:23:27.420540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.686 ms 00:17:41.746 [2024-11-17 04:23:27.420547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.422260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.422295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:41.746 [2024-11-17 04:23:27.422303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:17:41.746 [2024-11-17 04:23:27.422309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.746 [2024-11-17 04:23:27.424194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.746 [2024-11-17 04:23:27.424229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:41.746 [2024-11-17 04:23:27.424238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.827 ms 00:17:41.747 [2024-11-17 04:23:27.424245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.747 [2024-11-17 04:23:27.424286] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:41.747 [2024-11-17 04:23:27.424299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 
04:23:27.424333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:41.747 [2024-11-17 04:23:27.424524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:41.747 [2024-11-17 04:23:27.424798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.424996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.425003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.425011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.425019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.425027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.425043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.425051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:41.748 [2024-11-17 04:23:27.425066] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:41.748 [2024-11-17 04:23:27.425075] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c15d8b6-41d9-49c5-8e71-5e06c15819a7 00:17:41.748 [2024-11-17 04:23:27.425083] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:41.748 [2024-11-17 04:23:27.425090] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:41.748 
[2024-11-17 04:23:27.425102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:41.748 [2024-11-17 04:23:27.425110] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:41.748 [2024-11-17 04:23:27.425117] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:41.748 [2024-11-17 04:23:27.425128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:41.748 [2024-11-17 04:23:27.425135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:41.748 [2024-11-17 04:23:27.425142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:41.748 [2024-11-17 04:23:27.425148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:41.748 [2024-11-17 04:23:27.425155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.748 [2024-11-17 04:23:27.425163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:41.748 [2024-11-17 04:23:27.425171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:17:41.748 [2024-11-17 04:23:27.425178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.426835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.748 [2024-11-17 04:23:27.426864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:41.748 [2024-11-17 04:23:27.426874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:17:41.748 [2024-11-17 04:23:27.426886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.426976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.748 [2024-11-17 04:23:27.426986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:41.748 [2024-11-17 04:23:27.426995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:41.748 [2024-11-17 04:23:27.427002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.433083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.748 [2024-11-17 04:23:27.433119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.748 [2024-11-17 04:23:27.433134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.748 [2024-11-17 04:23:27.433141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.433213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.748 [2024-11-17 04:23:27.433222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.748 [2024-11-17 04:23:27.433229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.748 [2024-11-17 04:23:27.433236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.433274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.748 [2024-11-17 04:23:27.433283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.748 [2024-11-17 04:23:27.433291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.748 [2024-11-17 04:23:27.433298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.433321] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:41.748 [2024-11-17 04:23:27.433329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.748 [2024-11-17 04:23:27.433340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.748 [2024-11-17 04:23:27.433347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.443880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.748 [2024-11-17 04:23:27.443925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.748 [2024-11-17 04:23:27.443936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.748 [2024-11-17 04:23:27.443949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.452602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.748 [2024-11-17 04:23:27.452645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.748 [2024-11-17 04:23:27.452655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.748 [2024-11-17 04:23:27.452663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.452691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.748 [2024-11-17 04:23:27.452699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.748 [2024-11-17 04:23:27.452707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.748 [2024-11-17 04:23:27.452715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.452749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.748 [2024-11-17 04:23:27.452758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.748 [2024-11-17 04:23:27.452766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.748 [2024-11-17 04:23:27.452774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.748 [2024-11-17 04:23:27.452847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.748 [2024-11-17 04:23:27.452857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.748 [2024-11-17 04:23:27.452865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.749 [2024-11-17 04:23:27.452873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.749 [2024-11-17 04:23:27.452902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.749 [2024-11-17 04:23:27.452916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:41.749 [2024-11-17 04:23:27.452924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.749 [2024-11-17 04:23:27.452931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.749 [2024-11-17 04:23:27.452972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.749 [2024-11-17 04:23:27.452980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.749 [2024-11-17 04:23:27.452988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.749 [2024-11-17 04:23:27.452996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
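The shutdown trace just below ends the spdk_dd process; right after it, trim.sh (steps @92 to @96) starts a fresh spdk_tgt with the ftl_init log flag, waits for it to listen on the RPC socket, and replays the ftl.json configuration through rpc.py load_config. A rough sketch of that restart flow, assuming the same paths as in this log and approximating the waitforlisten helper with a simple poll, is:

    # Illustrative sketch of the target restart shown in the lines below;
    # not a verbatim excerpt of trim.sh.
    SPDK=/home/vagrant/spdk_repo/spdk

    # @92/@93: launch the target with FTL init tracing and remember its pid.
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!

    # @94: wait until the app answers on the default RPC socket
    # (the real test uses waitforlisten from autotest_common.sh).
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # @96: replay the saved bdev/FTL configuration; feeding ftl.json on stdin is
    # an assumption here, the log only shows the bare load_config invocation.
    "$SPDK/scripts/rpc.py" load_config < "$SPDK/test/ftl/config/ftl.json"

The FTL startup trace that continues after the load_config call is this new spdk_tgt instance bringing ftl0 back up from that configuration.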
00:17:41.749 [2024-11-17 04:23:27.453040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.749 [2024-11-17 04:23:27.453050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.749 [2024-11-17 04:23:27.453058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.749 [2024-11-17 04:23:27.453065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.749 [2024-11-17 04:23:27.453205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.442 ms, result 0 00:17:42.009 00:17:42.009 00:17:42.009 04:23:27 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85557 00:17:42.009 04:23:27 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85557 00:17:42.009 04:23:27 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:42.009 04:23:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85557 ']' 00:17:42.009 04:23:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.009 04:23:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.009 04:23:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.009 04:23:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.009 04:23:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:42.009 [2024-11-17 04:23:27.721541] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:42.009 [2024-11-17 04:23:27.721683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85557 ] 00:17:42.273 [2024-11-17 04:23:27.882936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.273 [2024-11-17 04:23:27.916715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.846 04:23:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.846 04:23:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:42.846 04:23:28 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:43.107 [2024-11-17 04:23:28.764868] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.107 [2024-11-17 04:23:28.764929] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.370 [2024-11-17 04:23:28.942130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.370 [2024-11-17 04:23:28.942199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:43.370 [2024-11-17 04:23:28.942215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:43.370 [2024-11-17 04:23:28.942225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.370 [2024-11-17 04:23:28.944981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.370 [2024-11-17 04:23:28.945038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.370 [2024-11-17 04:23:28.945049] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.734 ms 00:17:43.370 [2024-11-17 04:23:28.945058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.370 [2024-11-17 04:23:28.945189] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:43.370 [2024-11-17 04:23:28.945539] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:43.370 [2024-11-17 04:23:28.945563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.370 [2024-11-17 04:23:28.945574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.370 [2024-11-17 04:23:28.945584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:17:43.370 [2024-11-17 04:23:28.945595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.370 [2024-11-17 04:23:28.947362] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:43.370 [2024-11-17 04:23:28.951318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.370 [2024-11-17 04:23:28.951394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:43.370 [2024-11-17 04:23:28.951410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.953 ms 00:17:43.371 [2024-11-17 04:23:28.951419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.371 [2024-11-17 04:23:28.951502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.371 [2024-11-17 04:23:28.951513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:43.371 [2024-11-17 04:23:28.951528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:43.371 [2024-11-17 04:23:28.951536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.371 [2024-11-17 04:23:28.960062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.371 [2024-11-17 04:23:28.960109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.371 [2024-11-17 04:23:28.960122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.465 ms 00:17:43.371 [2024-11-17 04:23:28.960131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.371 [2024-11-17 04:23:28.960273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.371 [2024-11-17 04:23:28.960288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.371 [2024-11-17 04:23:28.960299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:43.371 [2024-11-17 04:23:28.960311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.371 [2024-11-17 04:23:28.960342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.371 [2024-11-17 04:23:28.960350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:43.371 [2024-11-17 04:23:28.960368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:43.371 [2024-11-17 04:23:28.960407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.371 [2024-11-17 04:23:28.960438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:43.371 [2024-11-17 04:23:28.962543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:43.371 [2024-11-17 04:23:28.962590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.371 [2024-11-17 04:23:28.962600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:17:43.371 [2024-11-17 04:23:28.962613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.371 [2024-11-17 04:23:28.962653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.371 [2024-11-17 04:23:28.962664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:43.371 [2024-11-17 04:23:28.962673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:43.371 [2024-11-17 04:23:28.962683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.371 [2024-11-17 04:23:28.962705] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:43.371 [2024-11-17 04:23:28.962729] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:43.371 [2024-11-17 04:23:28.962773] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:43.371 [2024-11-17 04:23:28.962799] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:43.371 [2024-11-17 04:23:28.962907] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:43.371 [2024-11-17 04:23:28.962921] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:43.371 [2024-11-17 04:23:28.962933] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:43.371 [2024-11-17 04:23:28.962946] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:43.371 [2024-11-17 04:23:28.962956] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:43.371 [2024-11-17 04:23:28.962972] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:43.371 [2024-11-17 04:23:28.962980] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:43.371 [2024-11-17 04:23:28.962994] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:43.371 [2024-11-17 04:23:28.963004] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:43.371 [2024-11-17 04:23:28.963014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.371 [2024-11-17 04:23:28.963022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:43.371 [2024-11-17 04:23:28.963032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:17:43.371 [2024-11-17 04:23:28.963039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.371 [2024-11-17 04:23:28.963127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.371 [2024-11-17 04:23:28.963137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:43.371 [2024-11-17 04:23:28.963147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:43.371 [2024-11-17 04:23:28.963155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.371 [2024-11-17 04:23:28.963266] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:43.371 [2024-11-17 04:23:28.963285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:43.371 [2024-11-17 04:23:28.963297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.371 [2024-11-17 04:23:28.963306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:43.371 [2024-11-17 04:23:28.963327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:43.371 [2024-11-17 04:23:28.963353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:43.371 [2024-11-17 04:23:28.963363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.371 [2024-11-17 04:23:28.963399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:43.371 [2024-11-17 04:23:28.963407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:43.371 [2024-11-17 04:23:28.963417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.371 [2024-11-17 04:23:28.963425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:43.371 [2024-11-17 04:23:28.963435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:43.371 [2024-11-17 04:23:28.963443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:43.371 [2024-11-17 04:23:28.963462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:43.371 [2024-11-17 04:23:28.963472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:43.371 [2024-11-17 04:23:28.963491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.371 [2024-11-17 04:23:28.963509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:43.371 [2024-11-17 04:23:28.963517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.371 [2024-11-17 04:23:28.963537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:43.371 [2024-11-17 04:23:28.963548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.371 [2024-11-17 04:23:28.963566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:43.371 [2024-11-17 04:23:28.963574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.371 [2024-11-17 04:23:28.963593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:43.371 [2024-11-17 
04:23:28.963602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.371 [2024-11-17 04:23:28.963622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:43.371 [2024-11-17 04:23:28.963630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:43.371 [2024-11-17 04:23:28.963641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.371 [2024-11-17 04:23:28.963649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:43.371 [2024-11-17 04:23:28.963659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:43.371 [2024-11-17 04:23:28.963668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:43.371 [2024-11-17 04:23:28.963686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:43.371 [2024-11-17 04:23:28.963696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963703] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:43.371 [2024-11-17 04:23:28.963716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:43.371 [2024-11-17 04:23:28.963724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.371 [2024-11-17 04:23:28.963733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.371 [2024-11-17 04:23:28.963741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:43.371 [2024-11-17 04:23:28.963750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:43.371 [2024-11-17 04:23:28.963757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:43.371 [2024-11-17 04:23:28.963766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:43.371 [2024-11-17 04:23:28.963773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:43.371 [2024-11-17 04:23:28.963783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:43.371 [2024-11-17 04:23:28.963793] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:43.371 [2024-11-17 04:23:28.963805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.371 [2024-11-17 04:23:28.963813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:43.372 [2024-11-17 04:23:28.963826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:43.372 [2024-11-17 04:23:28.963833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:43.372 [2024-11-17 04:23:28.963842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:43.372 [2024-11-17 04:23:28.963850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:43.372 
[2024-11-17 04:23:28.963861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:43.372 [2024-11-17 04:23:28.963868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:43.372 [2024-11-17 04:23:28.963877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:43.372 [2024-11-17 04:23:28.963885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:43.372 [2024-11-17 04:23:28.963894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:43.372 [2024-11-17 04:23:28.963901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:43.372 [2024-11-17 04:23:28.963910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:43.372 [2024-11-17 04:23:28.963918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:43.372 [2024-11-17 04:23:28.963929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:43.372 [2024-11-17 04:23:28.963936] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:43.372 [2024-11-17 04:23:28.963950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.372 [2024-11-17 04:23:28.963958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:43.372 [2024-11-17 04:23:28.963966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:43.372 [2024-11-17 04:23:28.963974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:43.372 [2024-11-17 04:23:28.963983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:43.372 [2024-11-17 04:23:28.963991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:28.964000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:43.372 [2024-11-17 04:23:28.964008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:17:43.372 [2024-11-17 04:23:28.964017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:28.978768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:28.978822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.372 [2024-11-17 04:23:28.978840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.688 ms 00:17:43.372 [2024-11-17 04:23:28.978851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:28.978991] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:28.979008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:43.372 [2024-11-17 04:23:28.979016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:43.372 [2024-11-17 04:23:28.979026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:28.992392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:28.992442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.372 [2024-11-17 04:23:28.992453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.323 ms 00:17:43.372 [2024-11-17 04:23:28.992463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:28.992537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:28.992550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.372 [2024-11-17 04:23:28.992559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:43.372 [2024-11-17 04:23:28.992569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:28.993116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:28.993170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.372 [2024-11-17 04:23:28.993182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:17:43.372 [2024-11-17 04:23:28.993193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:28.993348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:28.993367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.372 [2024-11-17 04:23:28.993407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:43.372 [2024-11-17 04:23:28.993421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.002247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.002302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.372 [2024-11-17 04:23:29.002313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.801 ms 00:17:43.372 [2024-11-17 04:23:29.002323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.006594] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:43.372 [2024-11-17 04:23:29.006650] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:43.372 [2024-11-17 04:23:29.006663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.006674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:43.372 [2024-11-17 04:23:29.006684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.200 ms 00:17:43.372 [2024-11-17 04:23:29.006694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.022689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 
04:23:29.022747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:43.372 [2024-11-17 04:23:29.022760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.912 ms 00:17:43.372 [2024-11-17 04:23:29.022773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.025867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.025927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:43.372 [2024-11-17 04:23:29.025938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.996 ms 00:17:43.372 [2024-11-17 04:23:29.025948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.028824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.028879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:43.372 [2024-11-17 04:23:29.028890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.822 ms 00:17:43.372 [2024-11-17 04:23:29.028899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.029253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.029282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:43.372 [2024-11-17 04:23:29.029293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:17:43.372 [2024-11-17 04:23:29.029303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.064786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.064863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:43.372 [2024-11-17 04:23:29.064878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.458 ms 00:17:43.372 [2024-11-17 04:23:29.064892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.073106] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:43.372 [2024-11-17 04:23:29.092888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.092939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:43.372 [2024-11-17 04:23:29.092955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.881 ms 00:17:43.372 [2024-11-17 04:23:29.092964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.093059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.093070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:43.372 [2024-11-17 04:23:29.093085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:43.372 [2024-11-17 04:23:29.093099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.093159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.093173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:43.372 [2024-11-17 04:23:29.093184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:43.372 [2024-11-17 
04:23:29.093192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.093221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.093229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:43.372 [2024-11-17 04:23:29.093245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:43.372 [2024-11-17 04:23:29.093255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.372 [2024-11-17 04:23:29.093295] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:43.372 [2024-11-17 04:23:29.093305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.372 [2024-11-17 04:23:29.093316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:43.372 [2024-11-17 04:23:29.093325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:43.372 [2024-11-17 04:23:29.093335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.634 [2024-11-17 04:23:29.099708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.634 [2024-11-17 04:23:29.099775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:43.634 [2024-11-17 04:23:29.099787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.350 ms 00:17:43.634 [2024-11-17 04:23:29.099801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.634 [2024-11-17 04:23:29.099900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.634 [2024-11-17 04:23:29.099914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:43.634 [2024-11-17 04:23:29.099923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:43.634 [2024-11-17 04:23:29.099933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.634 [2024-11-17 04:23:29.101084] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:43.634 [2024-11-17 04:23:29.102557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.639 ms, result 0 00:17:43.634 [2024-11-17 04:23:29.104518] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:43.634 Some configs were skipped because the RPC state that can call them passed over. 
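Editor's note: the FTL startup traced above completes in 158.639 ms and is driven from trim.sh with ordinary RPC calls against a freshly started target. A minimal shell sketch of that phase, assembled only from the binaries and arguments visible in this log (the workspace paths belong to this CI host, and feeding load_config from ftl.json is an assumption -- the log shows only the bare command):

  #!/usr/bin/env bash
  # Sketch of the FTL startup phase of ftl_trim, reconstructed from this log.
  SPDK=/home/vagrant/spdk_repo/spdk

  # Start the SPDK target with FTL init tracing enabled (trim.sh@92 in the log).
  "$SPDK/build/bin/spdk_tgt" -L ftl_init &
  svcpid=$!

  # The test waits for the target to listen on the default RPC socket.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # Restore the saved bdev/FTL configuration (trim.sh@96); redirecting it from
  # ftl.json is an assumption -- the log only shows "rpc.py load_config".
  "$SPDK/scripts/rpc.py" load_config < "$SPDK/test/ftl/config/ftl.json"

As a consistency check on the layout dump above, the 90.00 MiB l2p region matches the reported 23592960 L2P entries at an address size of 4 bytes (23592960 x 4 = 94371840 bytes = 90 MiB).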
00:17:43.634 04:23:29 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:43.634 [2024-11-17 04:23:29.346278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.634 [2024-11-17 04:23:29.346342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:43.634 [2024-11-17 04:23:29.346357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.066 ms 00:17:43.634 [2024-11-17 04:23:29.346365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.634 [2024-11-17 04:23:29.346417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.207 ms, result 0 00:17:43.634 true 00:17:43.896 04:23:29 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:43.896 [2024-11-17 04:23:29.561841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.896 [2024-11-17 04:23:29.561901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:43.896 [2024-11-17 04:23:29.561913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.350 ms 00:17:43.896 [2024-11-17 04:23:29.561922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.896 [2024-11-17 04:23:29.561959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.468 ms, result 0 00:17:43.896 true 00:17:43.896 04:23:29 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85557 00:17:43.896 04:23:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85557 ']' 00:17:43.896 04:23:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85557 00:17:43.896 04:23:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:43.896 04:23:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.896 04:23:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85557 00:17:43.896 04:23:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.896 killing process with pid 85557 00:17:43.896 04:23:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.896 04:23:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85557' 00:17:43.896 04:23:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85557 00:17:43.896 04:23:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85557 00:17:44.159 [2024-11-17 04:23:29.724775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.724834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:44.159 [2024-11-17 04:23:29.724847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.159 [2024-11-17 04:23:29.724856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.724882] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:44.159 [2024-11-17 04:23:29.725412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.725440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:44.159 [2024-11-17 04:23:29.725451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.512 ms 00:17:44.159 [2024-11-17 04:23:29.725461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.725742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.725754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:44.159 [2024-11-17 04:23:29.725762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:17:44.159 [2024-11-17 04:23:29.725772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.730409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.730445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:44.159 [2024-11-17 04:23:29.730455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.618 ms 00:17:44.159 [2024-11-17 04:23:29.730464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.737406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.737443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:44.159 [2024-11-17 04:23:29.737453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.903 ms 00:17:44.159 [2024-11-17 04:23:29.737464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.739970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.740012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:44.159 [2024-11-17 04:23:29.740021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:17:44.159 [2024-11-17 04:23:29.740029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.743629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.743672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:44.159 [2024-11-17 04:23:29.743682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.561 ms 00:17:44.159 [2024-11-17 04:23:29.743694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.743820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.743831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:44.159 [2024-11-17 04:23:29.743840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:44.159 [2024-11-17 04:23:29.743848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.746789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.746830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:44.159 [2024-11-17 04:23:29.746839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.922 ms 00:17:44.159 [2024-11-17 04:23:29.746850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.749104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.749146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:44.159 [2024-11-17 
04:23:29.749155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.215 ms 00:17:44.159 [2024-11-17 04:23:29.749166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.751082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.751126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:44.159 [2024-11-17 04:23:29.751135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.877 ms 00:17:44.159 [2024-11-17 04:23:29.751143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.753140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.159 [2024-11-17 04:23:29.753180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:44.159 [2024-11-17 04:23:29.753190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.934 ms 00:17:44.159 [2024-11-17 04:23:29.753198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.159 [2024-11-17 04:23:29.753233] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:44.159 [2024-11-17 04:23:29.753249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:44.159 [2024-11-17 04:23:29.753263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:44.159 [2024-11-17 04:23:29.753275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:44.159 [2024-11-17 04:23:29.753283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:44.159 [2024-11-17 04:23:29.753293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:44.159 [2024-11-17 04:23:29.753300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:44.159 [2024-11-17 04:23:29.753309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:44.159 [2024-11-17 04:23:29.753317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:44.159 [2024-11-17 04:23:29.753325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:44.159 [2024-11-17 04:23:29.753333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:44.159 [2024-11-17 04:23:29.753342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753408] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 
04:23:29.753619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:44.160 [2024-11-17 04:23:29.753833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.753997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:44.160 [2024-11-17 04:23:29.754117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:44.161 [2024-11-17 04:23:29.754135] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:44.161 [2024-11-17 04:23:29.754143] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c15d8b6-41d9-49c5-8e71-5e06c15819a7 00:17:44.161 [2024-11-17 04:23:29.754153] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:44.161 [2024-11-17 04:23:29.754162] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:44.161 [2024-11-17 04:23:29.754171] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:44.161 [2024-11-17 04:23:29.754180] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:44.161 [2024-11-17 04:23:29.754188] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:44.161 [2024-11-17 04:23:29.754199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:44.161 [2024-11-17 04:23:29.754208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:44.161 [2024-11-17 04:23:29.754214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:44.161 [2024-11-17 04:23:29.754223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:44.161 [2024-11-17 04:23:29.754230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.161 [2024-11-17 04:23:29.754238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:44.161 [2024-11-17 04:23:29.754247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:17:44.161 [2024-11-17 04:23:29.754258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.755889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.161 [2024-11-17 04:23:29.755921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:44.161 [2024-11-17 04:23:29.755932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:17:44.161 [2024-11-17 04:23:29.755942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.756036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:44.161 [2024-11-17 04:23:29.756047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:44.161 [2024-11-17 04:23:29.756056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:44.161 [2024-11-17 04:23:29.756066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.762144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.762185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.161 [2024-11-17 04:23:29.762195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.762205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.762285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.762297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.161 [2024-11-17 04:23:29.762306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.762317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.762361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.762386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.161 [2024-11-17 04:23:29.762394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.762404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.762422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.762431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.161 [2024-11-17 04:23:29.762439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.762448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.773306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.773356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.161 [2024-11-17 04:23:29.773366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.773405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.781612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.781661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.161 [2024-11-17 04:23:29.781671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.781683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.781726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.781740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.161 [2024-11-17 04:23:29.781748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.781758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:44.161 [2024-11-17 04:23:29.781789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.781799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.161 [2024-11-17 04:23:29.781807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.781816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.781888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.781899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.161 [2024-11-17 04:23:29.781909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.781920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.781956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.781970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:44.161 [2024-11-17 04:23:29.781979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.781989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.782028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.782038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.161 [2024-11-17 04:23:29.782048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.782057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.782100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.161 [2024-11-17 04:23:29.782112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.161 [2024-11-17 04:23:29.782119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.161 [2024-11-17 04:23:29.782128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.161 [2024-11-17 04:23:29.782263] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.464 ms, result 0 00:17:44.422 04:23:29 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:44.422 [2024-11-17 04:23:30.043571] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:17:44.422 [2024-11-17 04:23:30.043714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85599 ] 00:17:44.683 [2024-11-17 04:23:30.203693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.683 [2024-11-17 04:23:30.235116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.683 [2024-11-17 04:23:30.350038] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.683 [2024-11-17 04:23:30.350112] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.947 [2024-11-17 04:23:30.511487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.947 [2024-11-17 04:23:30.511546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.947 [2024-11-17 04:23:30.511562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.947 [2024-11-17 04:23:30.511571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-17 04:23:30.514170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.947 [2024-11-17 04:23:30.514220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.947 [2024-11-17 04:23:30.514231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:17:44.947 [2024-11-17 04:23:30.514240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-17 04:23:30.514353] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.947 [2024-11-17 04:23:30.514669] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.947 [2024-11-17 04:23:30.514695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.947 [2024-11-17 04:23:30.514707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.947 [2024-11-17 04:23:30.514718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:17:44.947 [2024-11-17 04:23:30.514726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-17 04:23:30.516625] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:44.947 [2024-11-17 04:23:30.520488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.947 [2024-11-17 04:23:30.520540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:44.947 [2024-11-17 04:23:30.520558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.865 ms 00:17:44.947 [2024-11-17 04:23:30.520566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-17 04:23:30.520670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.947 [2024-11-17 04:23:30.520682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:44.947 [2024-11-17 04:23:30.520692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:44.947 [2024-11-17 04:23:30.520700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-17 04:23:30.529111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:44.947 [2024-11-17 04:23:30.529158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.947 [2024-11-17 04:23:30.529169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.365 ms 00:17:44.947 [2024-11-17 04:23:30.529177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-17 04:23:30.529326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.947 [2024-11-17 04:23:30.529342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.947 [2024-11-17 04:23:30.529356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:44.947 [2024-11-17 04:23:30.529363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-17 04:23:30.529427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.947 [2024-11-17 04:23:30.529437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.947 [2024-11-17 04:23:30.529446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:44.947 [2024-11-17 04:23:30.529457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-17 04:23:30.529479] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:44.947 [2024-11-17 04:23:30.531610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.947 [2024-11-17 04:23:30.531646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.947 [2024-11-17 04:23:30.531656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:17:44.947 [2024-11-17 04:23:30.531664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-17 04:23:30.531718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.947 [2024-11-17 04:23:30.531726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.947 [2024-11-17 04:23:30.531735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:44.947 [2024-11-17 04:23:30.531743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-17 04:23:30.531767] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:44.948 [2024-11-17 04:23:30.531789] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:44.948 [2024-11-17 04:23:30.531826] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:44.948 [2024-11-17 04:23:30.531845] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:44.948 [2024-11-17 04:23:30.531950] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.948 [2024-11-17 04:23:30.531961] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.948 [2024-11-17 04:23:30.531973] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:44.948 [2024-11-17 04:23:30.531984] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.948 [2024-11-17 04:23:30.531992] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.948 [2024-11-17 04:23:30.532001] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:44.948 [2024-11-17 04:23:30.532009] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.948 [2024-11-17 04:23:30.532017] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.948 [2024-11-17 04:23:30.532027] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.948 [2024-11-17 04:23:30.532037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.948 [2024-11-17 04:23:30.532045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.948 [2024-11-17 04:23:30.532054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:17:44.948 [2024-11-17 04:23:30.532061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.948 [2024-11-17 04:23:30.532152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.948 [2024-11-17 04:23:30.532161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.948 [2024-11-17 04:23:30.532168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:44.948 [2024-11-17 04:23:30.532176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.948 [2024-11-17 04:23:30.532301] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.948 [2024-11-17 04:23:30.532313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.948 [2024-11-17 04:23:30.532328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.948 [2024-11-17 04:23:30.532352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.948 [2024-11-17 04:23:30.532370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:44.948 [2024-11-17 04:23:30.532407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.948 [2024-11-17 04:23:30.532416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.948 [2024-11-17 04:23:30.532433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.948 [2024-11-17 04:23:30.532441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:44.948 [2024-11-17 04:23:30.532448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.948 [2024-11-17 04:23:30.532456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.948 [2024-11-17 04:23:30.532464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:44.948 [2024-11-17 04:23:30.532474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.948 [2024-11-17 04:23:30.532490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:44.948 [2024-11-17 04:23:30.532497] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.948 [2024-11-17 04:23:30.532513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.948 [2024-11-17 04:23:30.532529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.948 [2024-11-17 04:23:30.532543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.948 [2024-11-17 04:23:30.532559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.948 [2024-11-17 04:23:30.532567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.948 [2024-11-17 04:23:30.532583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.948 [2024-11-17 04:23:30.532591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.948 [2024-11-17 04:23:30.532606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.948 [2024-11-17 04:23:30.532614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.948 [2024-11-17 04:23:30.532630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.948 [2024-11-17 04:23:30.532637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:44.948 [2024-11-17 04:23:30.532643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.948 [2024-11-17 04:23:30.532650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.948 [2024-11-17 04:23:30.532657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:44.948 [2024-11-17 04:23:30.532666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.948 [2024-11-17 04:23:30.532681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:44.948 [2024-11-17 04:23:30.532689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532695] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.948 [2024-11-17 04:23:30.532710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.948 [2024-11-17 04:23:30.532720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.948 [2024-11-17 04:23:30.532728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.948 [2024-11-17 04:23:30.532736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.948 [2024-11-17 04:23:30.532743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.948 [2024-11-17 04:23:30.532750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.948 
[2024-11-17 04:23:30.532756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.948 [2024-11-17 04:23:30.532763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.948 [2024-11-17 04:23:30.532771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.948 [2024-11-17 04:23:30.532781] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.948 [2024-11-17 04:23:30.532790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.948 [2024-11-17 04:23:30.532800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:44.948 [2024-11-17 04:23:30.532808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:44.948 [2024-11-17 04:23:30.532815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:44.948 [2024-11-17 04:23:30.532822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:44.948 [2024-11-17 04:23:30.532830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:44.948 [2024-11-17 04:23:30.532837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:44.948 [2024-11-17 04:23:30.532845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:44.948 [2024-11-17 04:23:30.532852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:44.948 [2024-11-17 04:23:30.532859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:44.948 [2024-11-17 04:23:30.532866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:44.948 [2024-11-17 04:23:30.532874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:44.948 [2024-11-17 04:23:30.532881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:44.948 [2024-11-17 04:23:30.532888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:44.948 [2024-11-17 04:23:30.532895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:44.948 [2024-11-17 04:23:30.532902] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.948 [2024-11-17 04:23:30.532911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.948 [2024-11-17 04:23:30.532924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.948 [2024-11-17 04:23:30.532937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.948 [2024-11-17 04:23:30.532945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.948 [2024-11-17 04:23:30.532952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.948 [2024-11-17 04:23:30.532960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.948 [2024-11-17 04:23:30.532968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.948 [2024-11-17 04:23:30.532976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:17:44.949 [2024-11-17 04:23:30.532987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.547604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.547652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.949 [2024-11-17 04:23:30.547666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.563 ms 00:17:44.949 [2024-11-17 04:23:30.547684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.547823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.547838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.949 [2024-11-17 04:23:30.547847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:44.949 [2024-11-17 04:23:30.547856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.570494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.570551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.949 [2024-11-17 04:23:30.570564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.611 ms 00:17:44.949 [2024-11-17 04:23:30.570572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.570672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.570688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.949 [2024-11-17 04:23:30.570697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.949 [2024-11-17 04:23:30.570710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.571226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.571249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.949 [2024-11-17 04:23:30.571259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:17:44.949 [2024-11-17 04:23:30.571269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.571484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.571497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.949 [2024-11-17 04:23:30.571510] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:17:44.949 [2024-11-17 04:23:30.571519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.580527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.580737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.949 [2024-11-17 04:23:30.580771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.980 ms 00:17:44.949 [2024-11-17 04:23:30.580780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.585047] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:44.949 [2024-11-17 04:23:30.585102] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.949 [2024-11-17 04:23:30.585115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.585123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.949 [2024-11-17 04:23:30.585132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.193 ms 00:17:44.949 [2024-11-17 04:23:30.585139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.601168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.601220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.949 [2024-11-17 04:23:30.601233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.948 ms 00:17:44.949 [2024-11-17 04:23:30.601241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.604158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.604353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.949 [2024-11-17 04:23:30.604394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.815 ms 00:17:44.949 [2024-11-17 04:23:30.604402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.607182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.607232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.949 [2024-11-17 04:23:30.607243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.680 ms 00:17:44.949 [2024-11-17 04:23:30.607251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.607756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.607820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.949 [2024-11-17 04:23:30.608003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:17:44.949 [2024-11-17 04:23:30.608015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.632156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.632220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.949 [2024-11-17 04:23:30.632235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.105 ms 00:17:44.949 [2024-11-17 04:23:30.632245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.640579] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:44.949 [2024-11-17 04:23:30.660224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.660289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.949 [2024-11-17 04:23:30.660303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.856 ms 00:17:44.949 [2024-11-17 04:23:30.660312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.660442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.660456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.949 [2024-11-17 04:23:30.660466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:44.949 [2024-11-17 04:23:30.660479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.660539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.660549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.949 [2024-11-17 04:23:30.660559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:44.949 [2024-11-17 04:23:30.660573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.660598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.660607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.949 [2024-11-17 04:23:30.660617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:44.949 [2024-11-17 04:23:30.660629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.660671] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.949 [2024-11-17 04:23:30.660683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.660691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.949 [2024-11-17 04:23:30.660699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:44.949 [2024-11-17 04:23:30.660708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.667100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.667299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.949 [2024-11-17 04:23:30.667320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:17:44.949 [2024-11-17 04:23:30.667330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-17 04:23:30.667449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-17 04:23:30.667462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.949 [2024-11-17 04:23:30.667476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:44.949 [2024-11-17 04:23:30.667484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 
[2024-11-17 04:23:30.668562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.949 [2024-11-17 04:23:30.669975] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.688 ms, result 0 00:17:45.212 [2024-11-17 04:23:30.671269] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:45.212 [2024-11-17 04:23:30.678665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.157  [2024-11-17T04:23:32.829Z] Copying: 14/256 [MB] (14 MBps) [2024-11-17T04:23:33.770Z] Copying: 24/256 [MB] (10 MBps) [2024-11-17T04:23:35.160Z] Copying: 51/256 [MB] (26 MBps) [2024-11-17T04:23:36.103Z] Copying: 62/256 [MB] (11 MBps) [2024-11-17T04:23:37.048Z] Copying: 80/256 [MB] (17 MBps) [2024-11-17T04:23:37.994Z] Copying: 100/256 [MB] (20 MBps) [2024-11-17T04:23:38.942Z] Copying: 119/256 [MB] (18 MBps) [2024-11-17T04:23:39.883Z] Copying: 134/256 [MB] (15 MBps) [2024-11-17T04:23:40.873Z] Copying: 151/256 [MB] (16 MBps) [2024-11-17T04:23:41.829Z] Copying: 165/256 [MB] (13 MBps) [2024-11-17T04:23:42.776Z] Copying: 176/256 [MB] (11 MBps) [2024-11-17T04:23:44.161Z] Copying: 188/256 [MB] (12 MBps) [2024-11-17T04:23:45.105Z] Copying: 211/256 [MB] (22 MBps) [2024-11-17T04:23:46.048Z] Copying: 224/256 [MB] (13 MBps) [2024-11-17T04:23:46.994Z] Copying: 235/256 [MB] (11 MBps) [2024-11-17T04:23:46.994Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-17 04:23:46.779725] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:01.267 [2024-11-17 04:23:46.781686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.267 [2024-11-17 04:23:46.781746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:01.267 [2024-11-17 04:23:46.781762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:01.267 [2024-11-17 04:23:46.781770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.267 [2024-11-17 04:23:46.781796] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:01.267 [2024-11-17 04:23:46.782488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.267 [2024-11-17 04:23:46.782520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:01.267 [2024-11-17 04:23:46.782541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:18:01.267 [2024-11-17 04:23:46.782551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.267 [2024-11-17 04:23:46.782837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.267 [2024-11-17 04:23:46.782855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:01.267 [2024-11-17 04:23:46.782865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:18:01.267 [2024-11-17 04:23:46.782877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.267 [2024-11-17 04:23:46.786599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.267 [2024-11-17 04:23:46.786776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:01.267 [2024-11-17 04:23:46.786796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.704 ms 
00:18:01.267 [2024-11-17 04:23:46.786804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.267 [2024-11-17 04:23:46.794382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.267 [2024-11-17 04:23:46.794423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:01.267 [2024-11-17 04:23:46.794445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.539 ms 00:18:01.267 [2024-11-17 04:23:46.794461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.267 [2024-11-17 04:23:46.797341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.267 [2024-11-17 04:23:46.798040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:01.267 [2024-11-17 04:23:46.798072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:18:01.267 [2024-11-17 04:23:46.798082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.267 [2024-11-17 04:23:46.802684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.267 [2024-11-17 04:23:46.802738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:01.268 [2024-11-17 04:23:46.802750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.570 ms 00:18:01.268 [2024-11-17 04:23:46.802764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.268 [2024-11-17 04:23:46.802900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.268 [2024-11-17 04:23:46.802911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:01.268 [2024-11-17 04:23:46.802924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:01.268 [2024-11-17 04:23:46.802935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.268 [2024-11-17 04:23:46.806003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.268 [2024-11-17 04:23:46.806055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:01.268 [2024-11-17 04:23:46.806066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.049 ms 00:18:01.268 [2024-11-17 04:23:46.806074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.268 [2024-11-17 04:23:46.808730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.268 [2024-11-17 04:23:46.808917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:01.268 [2024-11-17 04:23:46.808936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.629 ms 00:18:01.268 [2024-11-17 04:23:46.808943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.268 [2024-11-17 04:23:46.811674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.268 [2024-11-17 04:23:46.811734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:01.268 [2024-11-17 04:23:46.811747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.700 ms 00:18:01.268 [2024-11-17 04:23:46.811755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.268 [2024-11-17 04:23:46.814052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.268 [2024-11-17 04:23:46.814098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:01.268 [2024-11-17 04:23:46.814108] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.234 ms 00:18:01.268 [2024-11-17 04:23:46.814115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.268 [2024-11-17 04:23:46.814139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:01.268 [2024-11-17 04:23:46.814154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:01.268 [2024-11-17 04:23:46.814729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814752] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 
04:23:46.814961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.814987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:01.269 [2024-11-17 04:23:46.815003] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:01.269 [2024-11-17 04:23:46.815011] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c15d8b6-41d9-49c5-8e71-5e06c15819a7 00:18:01.269 [2024-11-17 04:23:46.815028] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:01.269 [2024-11-17 04:23:46.815036] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:01.269 [2024-11-17 04:23:46.815044] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:01.269 [2024-11-17 04:23:46.815052] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:01.269 [2024-11-17 04:23:46.815060] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:01.269 [2024-11-17 04:23:46.815069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:01.269 [2024-11-17 04:23:46.815076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:01.269 [2024-11-17 04:23:46.815083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:01.269 [2024-11-17 04:23:46.815091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:01.269 [2024-11-17 04:23:46.815099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.269 [2024-11-17 04:23:46.815110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:01.269 [2024-11-17 04:23:46.815123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:18:01.269 [2024-11-17 04:23:46.815131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.817443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.269 [2024-11-17 04:23:46.817481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:01.269 [2024-11-17 04:23:46.817491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.268 ms 00:18:01.269 [2024-11-17 04:23:46.817499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.817617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.269 [2024-11-17 04:23:46.817626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:01.269 [2024-11-17 04:23:46.817635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:01.269 [2024-11-17 04:23:46.817643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.825482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.825661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:01.269 [2024-11-17 04:23:46.825679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.825688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.825783] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.825793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:01.269 [2024-11-17 04:23:46.825801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.825809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.825856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.825866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:01.269 [2024-11-17 04:23:46.825874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.825882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.825902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.825910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:01.269 [2024-11-17 04:23:46.825918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.825926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.839489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.839540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:01.269 [2024-11-17 04:23:46.839551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.839561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.850630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.850681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:01.269 [2024-11-17 04:23:46.850693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.850712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.850765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.850775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:01.269 [2024-11-17 04:23:46.850784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.850792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.850825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.850838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:01.269 [2024-11-17 04:23:46.850847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.850856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.850934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.850944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:01.269 [2024-11-17 04:23:46.850954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.850962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:18:01.269 [2024-11-17 04:23:46.850992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.851005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:01.269 [2024-11-17 04:23:46.851016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.851024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.851070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.851081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:01.269 [2024-11-17 04:23:46.851089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.851101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.851152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.269 [2024-11-17 04:23:46.851164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:01.269 [2024-11-17 04:23:46.851175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.269 [2024-11-17 04:23:46.851183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.269 [2024-11-17 04:23:46.851336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.626 ms, result 0 00:18:01.530 00:18:01.531 00:18:01.531 04:23:47 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:02.102 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:02.102 04:23:47 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:02.102 04:23:47 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:02.102 04:23:47 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:02.102 04:23:47 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:02.102 04:23:47 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:02.102 04:23:47 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:02.102 Process with pid 85557 is not found 00:18:02.102 04:23:47 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85557 00:18:02.102 04:23:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85557 ']' 00:18:02.103 04:23:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85557 00:18:02.103 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85557) - No such process 00:18:02.103 04:23:47 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 85557 is not found' 00:18:02.103 ************************************ 00:18:02.103 END TEST ftl_trim 00:18:02.103 ************************************ 00:18:02.103 00:18:02.103 real 1m15.062s 00:18:02.103 user 1m26.992s 00:18:02.103 sys 0m15.870s 00:18:02.103 04:23:47 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.103 04:23:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:02.103 04:23:47 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:02.103 04:23:47 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:02.103 04:23:47 ftl -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.103 04:23:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:02.103 ************************************ 00:18:02.103 START TEST ftl_restore 00:18:02.103 ************************************ 00:18:02.103 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:02.365 * Looking for test storage... 00:18:02.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:02.365 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.365 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.365 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.365 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.365 04:23:47 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:02.365 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.365 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.365 --rc genhtml_branch_coverage=1 00:18:02.365 --rc genhtml_function_coverage=1 00:18:02.365 --rc genhtml_legend=1 00:18:02.365 --rc geninfo_all_blocks=1 00:18:02.365 --rc geninfo_unexecuted_blocks=1 00:18:02.365 00:18:02.365 ' 00:18:02.365 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.365 --rc genhtml_branch_coverage=1 00:18:02.365 --rc genhtml_function_coverage=1 00:18:02.365 --rc genhtml_legend=1 00:18:02.365 --rc geninfo_all_blocks=1 00:18:02.365 --rc geninfo_unexecuted_blocks=1 00:18:02.365 00:18:02.365 ' 00:18:02.365 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.365 --rc genhtml_branch_coverage=1 00:18:02.365 --rc genhtml_function_coverage=1 00:18:02.365 --rc genhtml_legend=1 00:18:02.365 --rc geninfo_all_blocks=1 00:18:02.365 --rc geninfo_unexecuted_blocks=1 00:18:02.365 00:18:02.365 ' 00:18:02.365 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.365 --rc genhtml_branch_coverage=1 00:18:02.365 --rc genhtml_function_coverage=1 00:18:02.365 --rc genhtml_legend=1 00:18:02.365 --rc geninfo_all_blocks=1 00:18:02.365 --rc geninfo_unexecuted_blocks=1 00:18:02.365 00:18:02.365 ' 00:18:02.365 04:23:47 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:02.365 04:23:47 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:02.365 04:23:47 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:02.365 04:23:47 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:02.365 04:23:47 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
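A minimal sketch of the version comparison the xtrace above steps through (an illustrative reimplementation, not the real scripts/common.sh helpers): both version strings are split on the characters '.', '-' and ':' and the numeric fields are compared left to right, so "lt 1.15 2" succeeds because 1 < 2, which is how the test decides the installed lcov is older than 2.x and picks the matching LCOV_OPTS.

# Illustrative sketch only -- the actual helpers live in scripts/common.sh.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v d1 d2
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}            # missing fields count as 0
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]  # all fields equal
}

lt 1.15 2 && echo "lcov older than 2.x"   # matches the trace: 1 < 2, so lt returns 0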
00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.mYrolRQ1P0 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:02.366 
04:23:47 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=85851 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 85851 00:18:02.366 04:23:47 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.366 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 85851 ']' 00:18:02.366 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.366 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.366 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.366 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.366 04:23:47 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:02.366 [2024-11-17 04:23:48.018952] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:18:02.366 [2024-11-17 04:23:48.019234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85851 ] 00:18:02.628 [2024-11-17 04:23:48.176558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.628 [2024-11-17 04:23:48.215474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.201 04:23:48 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.201 04:23:48 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:18:03.201 04:23:48 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:03.201 04:23:48 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:03.201 04:23:48 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:03.201 04:23:48 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:03.201 04:23:48 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:03.201 04:23:48 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:03.463 04:23:49 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:03.463 04:23:49 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:03.463 04:23:49 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:03.463 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:03.463 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:03.463 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:03.463 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:03.463 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:03.723 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:03.723 { 00:18:03.723 "name": "nvme0n1", 00:18:03.723 "aliases": [ 00:18:03.723 "915b715f-5cef-424e-8826-ca0f22077874" 00:18:03.723 ], 00:18:03.723 "product_name": "NVMe disk", 00:18:03.723 "block_size": 4096, 00:18:03.723 "num_blocks": 1310720, 00:18:03.723 "uuid": 
"915b715f-5cef-424e-8826-ca0f22077874", 00:18:03.723 "numa_id": -1, 00:18:03.723 "assigned_rate_limits": { 00:18:03.723 "rw_ios_per_sec": 0, 00:18:03.723 "rw_mbytes_per_sec": 0, 00:18:03.723 "r_mbytes_per_sec": 0, 00:18:03.723 "w_mbytes_per_sec": 0 00:18:03.723 }, 00:18:03.723 "claimed": true, 00:18:03.723 "claim_type": "read_many_write_one", 00:18:03.723 "zoned": false, 00:18:03.723 "supported_io_types": { 00:18:03.723 "read": true, 00:18:03.723 "write": true, 00:18:03.723 "unmap": true, 00:18:03.723 "flush": true, 00:18:03.723 "reset": true, 00:18:03.723 "nvme_admin": true, 00:18:03.723 "nvme_io": true, 00:18:03.723 "nvme_io_md": false, 00:18:03.723 "write_zeroes": true, 00:18:03.723 "zcopy": false, 00:18:03.723 "get_zone_info": false, 00:18:03.723 "zone_management": false, 00:18:03.723 "zone_append": false, 00:18:03.723 "compare": true, 00:18:03.723 "compare_and_write": false, 00:18:03.723 "abort": true, 00:18:03.723 "seek_hole": false, 00:18:03.723 "seek_data": false, 00:18:03.723 "copy": true, 00:18:03.723 "nvme_iov_md": false 00:18:03.723 }, 00:18:03.723 "driver_specific": { 00:18:03.723 "nvme": [ 00:18:03.723 { 00:18:03.723 "pci_address": "0000:00:11.0", 00:18:03.723 "trid": { 00:18:03.723 "trtype": "PCIe", 00:18:03.723 "traddr": "0000:00:11.0" 00:18:03.723 }, 00:18:03.723 "ctrlr_data": { 00:18:03.723 "cntlid": 0, 00:18:03.723 "vendor_id": "0x1b36", 00:18:03.723 "model_number": "QEMU NVMe Ctrl", 00:18:03.723 "serial_number": "12341", 00:18:03.723 "firmware_revision": "8.0.0", 00:18:03.723 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:03.723 "oacs": { 00:18:03.723 "security": 0, 00:18:03.723 "format": 1, 00:18:03.723 "firmware": 0, 00:18:03.723 "ns_manage": 1 00:18:03.723 }, 00:18:03.723 "multi_ctrlr": false, 00:18:03.723 "ana_reporting": false 00:18:03.723 }, 00:18:03.723 "vs": { 00:18:03.723 "nvme_version": "1.4" 00:18:03.723 }, 00:18:03.723 "ns_data": { 00:18:03.723 "id": 1, 00:18:03.723 "can_share": false 00:18:03.723 } 00:18:03.723 } 00:18:03.723 ], 00:18:03.723 "mp_policy": "active_passive" 00:18:03.723 } 00:18:03.723 } 00:18:03.723 ]' 00:18:03.723 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:03.723 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:03.723 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:03.985 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:03.985 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:03.985 04:23:49 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:18:03.985 04:23:49 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:03.985 04:23:49 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:03.985 04:23:49 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:03.985 04:23:49 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:03.985 04:23:49 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:03.985 04:23:49 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=35847265-ae0a-489b-bac3-c3d989fc0818 00:18:03.985 04:23:49 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:03.985 04:23:49 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 35847265-ae0a-489b-bac3-c3d989fc0818 00:18:04.245 04:23:49 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:04.505 04:23:50 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=9649f2db-73b9-402e-a314-574cae5aa724 00:18:04.505 04:23:50 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9649f2db-73b9-402e-a314-574cae5aa724 00:18:04.764 04:23:50 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:04.764 04:23:50 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:04.764 04:23:50 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:04.764 04:23:50 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:04.764 04:23:50 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:04.764 04:23:50 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:04.764 04:23:50 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:04.764 04:23:50 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:04.764 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:04.764 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:04.764 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:04.764 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:04.764 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:05.023 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:05.023 { 00:18:05.023 "name": "45bbd20a-a7f7-414e-8949-50743556a0d3", 00:18:05.023 "aliases": [ 00:18:05.023 "lvs/nvme0n1p0" 00:18:05.023 ], 00:18:05.023 "product_name": "Logical Volume", 00:18:05.023 "block_size": 4096, 00:18:05.023 "num_blocks": 26476544, 00:18:05.023 "uuid": "45bbd20a-a7f7-414e-8949-50743556a0d3", 00:18:05.023 "assigned_rate_limits": { 00:18:05.023 "rw_ios_per_sec": 0, 00:18:05.023 "rw_mbytes_per_sec": 0, 00:18:05.023 "r_mbytes_per_sec": 0, 00:18:05.023 "w_mbytes_per_sec": 0 00:18:05.023 }, 00:18:05.023 "claimed": false, 00:18:05.023 "zoned": false, 00:18:05.023 "supported_io_types": { 00:18:05.023 "read": true, 00:18:05.023 "write": true, 00:18:05.023 "unmap": true, 00:18:05.023 "flush": false, 00:18:05.023 "reset": true, 00:18:05.023 "nvme_admin": false, 00:18:05.023 "nvme_io": false, 00:18:05.023 "nvme_io_md": false, 00:18:05.023 "write_zeroes": true, 00:18:05.023 "zcopy": false, 00:18:05.023 "get_zone_info": false, 00:18:05.023 "zone_management": false, 00:18:05.023 "zone_append": false, 00:18:05.023 "compare": false, 00:18:05.023 "compare_and_write": false, 00:18:05.023 "abort": false, 00:18:05.023 "seek_hole": true, 00:18:05.023 "seek_data": true, 00:18:05.023 "copy": false, 00:18:05.023 "nvme_iov_md": false 00:18:05.023 }, 00:18:05.023 "driver_specific": { 00:18:05.023 "lvol": { 00:18:05.023 "lvol_store_uuid": "9649f2db-73b9-402e-a314-574cae5aa724", 00:18:05.023 "base_bdev": "nvme0n1", 00:18:05.023 "thin_provision": true, 00:18:05.023 "num_allocated_clusters": 0, 00:18:05.023 "snapshot": false, 00:18:05.023 "clone": false, 00:18:05.023 "esnap_clone": false 00:18:05.023 } 00:18:05.023 } 00:18:05.023 } 00:18:05.023 ]' 00:18:05.023 04:23:50 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:05.023 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:05.023 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:05.023 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:05.023 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:05.023 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:05.023 04:23:50 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:05.023 04:23:50 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:05.023 04:23:50 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:05.284 04:23:50 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:05.284 04:23:50 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:05.284 04:23:50 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:05.284 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:05.284 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:05.284 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:05.284 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:05.284 04:23:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:05.545 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:05.545 { 00:18:05.545 "name": "45bbd20a-a7f7-414e-8949-50743556a0d3", 00:18:05.545 "aliases": [ 00:18:05.545 "lvs/nvme0n1p0" 00:18:05.545 ], 00:18:05.545 "product_name": "Logical Volume", 00:18:05.545 "block_size": 4096, 00:18:05.545 "num_blocks": 26476544, 00:18:05.545 "uuid": "45bbd20a-a7f7-414e-8949-50743556a0d3", 00:18:05.545 "assigned_rate_limits": { 00:18:05.545 "rw_ios_per_sec": 0, 00:18:05.545 "rw_mbytes_per_sec": 0, 00:18:05.545 "r_mbytes_per_sec": 0, 00:18:05.545 "w_mbytes_per_sec": 0 00:18:05.545 }, 00:18:05.545 "claimed": false, 00:18:05.545 "zoned": false, 00:18:05.545 "supported_io_types": { 00:18:05.545 "read": true, 00:18:05.545 "write": true, 00:18:05.545 "unmap": true, 00:18:05.545 "flush": false, 00:18:05.545 "reset": true, 00:18:05.545 "nvme_admin": false, 00:18:05.545 "nvme_io": false, 00:18:05.545 "nvme_io_md": false, 00:18:05.545 "write_zeroes": true, 00:18:05.545 "zcopy": false, 00:18:05.545 "get_zone_info": false, 00:18:05.545 "zone_management": false, 00:18:05.545 "zone_append": false, 00:18:05.545 "compare": false, 00:18:05.545 "compare_and_write": false, 00:18:05.545 "abort": false, 00:18:05.545 "seek_hole": true, 00:18:05.545 "seek_data": true, 00:18:05.545 "copy": false, 00:18:05.545 "nvme_iov_md": false 00:18:05.545 }, 00:18:05.545 "driver_specific": { 00:18:05.545 "lvol": { 00:18:05.545 "lvol_store_uuid": "9649f2db-73b9-402e-a314-574cae5aa724", 00:18:05.545 "base_bdev": "nvme0n1", 00:18:05.545 "thin_provision": true, 00:18:05.545 "num_allocated_clusters": 0, 00:18:05.545 "snapshot": false, 00:18:05.545 "clone": false, 00:18:05.545 "esnap_clone": false 00:18:05.545 } 00:18:05.545 } 00:18:05.545 } 00:18:05.545 ]' 00:18:05.545 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
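For orientation, the xtrace above boils down to the following RPC sequence and size arithmetic. This is a hedged recap assembled from the commands visible in the log (the loop structure and the bs/nb/lvs/lvol variable names are illustrative), not the ftl/common.sh and restore.sh helpers themselves.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: attach the PCIe controller at 0000:00:11.0 as nvme0 -> bdev nvme0n1
$rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

# get_bdev_size: block_size * num_blocks, reported in MiB
bs=$($rpc_py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')     # 4096
nb=$($rpc_py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')     # 1310720
echo $(( bs * nb / 1024 / 1024 ))                                  # 5120 MiB

# clear_lvols: drop any pre-existing lvstores, then build the thin lvol
# (103424 MiB on lvstore "lvs") that will serve as the FTL base bdev
for u in $($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $rpc_py bdev_lvol_delete_lvstore -u "$u"
done
lvs=$($rpc_py bdev_lvol_create_lvstore nvme0n1 lvs)
lvol=$($rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

# NV cache: attach the second controller at 0000:00:10.0 as nvc0
$rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0

In the trace that follows, the same get_bdev_size arithmetic sizes the write-buffer cache, nvc0n1 is partitioned with bdev_split_create nvc0n1 -s 5171 1, and the resulting nvc0n1p0 is passed to bdev_ftl_create together with the lvol and --l2p_dram_limit 10.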
00:18:05.545 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:05.545 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:05.545 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:05.545 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:05.545 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:05.545 04:23:51 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:05.545 04:23:51 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:05.807 04:23:51 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:05.807 04:23:51 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:05.807 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:05.807 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:05.807 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:05.807 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:05.807 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45bbd20a-a7f7-414e-8949-50743556a0d3 00:18:06.069 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:06.069 { 00:18:06.069 "name": "45bbd20a-a7f7-414e-8949-50743556a0d3", 00:18:06.069 "aliases": [ 00:18:06.069 "lvs/nvme0n1p0" 00:18:06.069 ], 00:18:06.069 "product_name": "Logical Volume", 00:18:06.069 "block_size": 4096, 00:18:06.069 "num_blocks": 26476544, 00:18:06.069 "uuid": "45bbd20a-a7f7-414e-8949-50743556a0d3", 00:18:06.069 "assigned_rate_limits": { 00:18:06.069 "rw_ios_per_sec": 0, 00:18:06.069 "rw_mbytes_per_sec": 0, 00:18:06.069 "r_mbytes_per_sec": 0, 00:18:06.069 "w_mbytes_per_sec": 0 00:18:06.069 }, 00:18:06.069 "claimed": false, 00:18:06.069 "zoned": false, 00:18:06.069 "supported_io_types": { 00:18:06.069 "read": true, 00:18:06.069 "write": true, 00:18:06.069 "unmap": true, 00:18:06.069 "flush": false, 00:18:06.069 "reset": true, 00:18:06.069 "nvme_admin": false, 00:18:06.069 "nvme_io": false, 00:18:06.069 "nvme_io_md": false, 00:18:06.069 "write_zeroes": true, 00:18:06.069 "zcopy": false, 00:18:06.069 "get_zone_info": false, 00:18:06.069 "zone_management": false, 00:18:06.069 "zone_append": false, 00:18:06.069 "compare": false, 00:18:06.069 "compare_and_write": false, 00:18:06.069 "abort": false, 00:18:06.069 "seek_hole": true, 00:18:06.069 "seek_data": true, 00:18:06.069 "copy": false, 00:18:06.069 "nvme_iov_md": false 00:18:06.069 }, 00:18:06.069 "driver_specific": { 00:18:06.069 "lvol": { 00:18:06.069 "lvol_store_uuid": "9649f2db-73b9-402e-a314-574cae5aa724", 00:18:06.069 "base_bdev": "nvme0n1", 00:18:06.069 "thin_provision": true, 00:18:06.069 "num_allocated_clusters": 0, 00:18:06.069 "snapshot": false, 00:18:06.069 "clone": false, 00:18:06.069 "esnap_clone": false 00:18:06.069 } 00:18:06.069 } 00:18:06.069 } 00:18:06.069 ]' 00:18:06.069 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:06.069 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:06.069 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:06.069 04:23:51 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:18:06.069 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:06.069 04:23:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:06.069 04:23:51 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:06.069 04:23:51 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 45bbd20a-a7f7-414e-8949-50743556a0d3 --l2p_dram_limit 10' 00:18:06.069 04:23:51 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:06.069 04:23:51 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:06.069 04:23:51 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:06.069 04:23:51 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:06.069 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:06.069 04:23:51 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 45bbd20a-a7f7-414e-8949-50743556a0d3 --l2p_dram_limit 10 -c nvc0n1p0 00:18:06.331 [2024-11-17 04:23:51.899523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.331 [2024-11-17 04:23:51.899569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:06.331 [2024-11-17 04:23:51.899580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:06.331 [2024-11-17 04:23:51.899588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.331 [2024-11-17 04:23:51.899634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.331 [2024-11-17 04:23:51.899643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.331 [2024-11-17 04:23:51.899651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:06.331 [2024-11-17 04:23:51.899660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.331 [2024-11-17 04:23:51.899678] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:06.331 [2024-11-17 04:23:51.899957] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:06.331 [2024-11-17 04:23:51.899981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.331 [2024-11-17 04:23:51.899989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.331 [2024-11-17 04:23:51.899997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:06.331 [2024-11-17 04:23:51.900003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.331 [2024-11-17 04:23:51.900034] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 56f46c39-64c9-4241-8388-8d11dc80ea47 00:18:06.331 [2024-11-17 04:23:51.901105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.331 [2024-11-17 04:23:51.901135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:06.331 [2024-11-17 04:23:51.901147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:06.331 [2024-11-17 04:23:51.901153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.331 [2024-11-17 04:23:51.906442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.331 [2024-11-17 
04:23:51.906469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.331 [2024-11-17 04:23:51.906480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.230 ms 00:18:06.331 [2024-11-17 04:23:51.906486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.331 [2024-11-17 04:23:51.906549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.331 [2024-11-17 04:23:51.906558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.331 [2024-11-17 04:23:51.906570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:06.331 [2024-11-17 04:23:51.906578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.331 [2024-11-17 04:23:51.906623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.331 [2024-11-17 04:23:51.906631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:06.331 [2024-11-17 04:23:51.906638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:06.331 [2024-11-17 04:23:51.906644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.331 [2024-11-17 04:23:51.906661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:06.331 [2024-11-17 04:23:51.908063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.331 [2024-11-17 04:23:51.908091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.331 [2024-11-17 04:23:51.908098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:18:06.331 [2024-11-17 04:23:51.908105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.331 [2024-11-17 04:23:51.908130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.331 [2024-11-17 04:23:51.908138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:06.331 [2024-11-17 04:23:51.908145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:06.331 [2024-11-17 04:23:51.908153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.331 [2024-11-17 04:23:51.908166] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:06.331 [2024-11-17 04:23:51.908281] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:06.331 [2024-11-17 04:23:51.908290] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:06.331 [2024-11-17 04:23:51.908304] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:06.331 [2024-11-17 04:23:51.908312] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:06.331 [2024-11-17 04:23:51.908325] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:06.331 [2024-11-17 04:23:51.908331] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:06.331 [2024-11-17 04:23:51.908341] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:06.331 [2024-11-17 04:23:51.908347] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:06.331 [2024-11-17 04:23:51.908354] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:06.331 [2024-11-17 04:23:51.908360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.332 [2024-11-17 04:23:51.908367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:06.332 [2024-11-17 04:23:51.908394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:18:06.332 [2024-11-17 04:23:51.908402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.332 [2024-11-17 04:23:51.908466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.332 [2024-11-17 04:23:51.908476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:06.332 [2024-11-17 04:23:51.908481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:06.332 [2024-11-17 04:23:51.908488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.332 [2024-11-17 04:23:51.908564] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:06.332 [2024-11-17 04:23:51.908574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:06.332 [2024-11-17 04:23:51.908580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.332 [2024-11-17 04:23:51.908587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:06.332 [2024-11-17 04:23:51.908600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:06.332 [2024-11-17 04:23:51.908612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:06.332 [2024-11-17 04:23:51.908618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.332 [2024-11-17 04:23:51.908628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:06.332 [2024-11-17 04:23:51.908636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:06.332 [2024-11-17 04:23:51.908640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.332 [2024-11-17 04:23:51.908649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:06.332 [2024-11-17 04:23:51.908654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:06.332 [2024-11-17 04:23:51.908660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:06.332 [2024-11-17 04:23:51.908672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:06.332 [2024-11-17 04:23:51.908676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:06.332 [2024-11-17 04:23:51.908691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.332 [2024-11-17 04:23:51.908702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:06.332 
[2024-11-17 04:23:51.908709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.332 [2024-11-17 04:23:51.908721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:06.332 [2024-11-17 04:23:51.908727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.332 [2024-11-17 04:23:51.908740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:06.332 [2024-11-17 04:23:51.908749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.332 [2024-11-17 04:23:51.908762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:06.332 [2024-11-17 04:23:51.908767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.332 [2024-11-17 04:23:51.908780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:06.332 [2024-11-17 04:23:51.908787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:06.332 [2024-11-17 04:23:51.908793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.332 [2024-11-17 04:23:51.908801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:06.332 [2024-11-17 04:23:51.908807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:06.332 [2024-11-17 04:23:51.908814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:06.332 [2024-11-17 04:23:51.908826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:06.332 [2024-11-17 04:23:51.908831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908838] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:06.332 [2024-11-17 04:23:51.908844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:06.332 [2024-11-17 04:23:51.908854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.332 [2024-11-17 04:23:51.908860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.332 [2024-11-17 04:23:51.908868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:06.332 [2024-11-17 04:23:51.908874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:06.332 [2024-11-17 04:23:51.908881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:06.332 [2024-11-17 04:23:51.908888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:06.332 [2024-11-17 04:23:51.908895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:06.332 [2024-11-17 04:23:51.908901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:06.332 [2024-11-17 04:23:51.908911] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:06.332 [2024-11-17 
04:23:51.908927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.332 [2024-11-17 04:23:51.908936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:06.332 [2024-11-17 04:23:51.908942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:06.332 [2024-11-17 04:23:51.908949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:06.332 [2024-11-17 04:23:51.908956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:06.332 [2024-11-17 04:23:51.908964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:06.332 [2024-11-17 04:23:51.908970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:06.332 [2024-11-17 04:23:51.908980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:06.332 [2024-11-17 04:23:51.908987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:06.332 [2024-11-17 04:23:51.908994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:06.332 [2024-11-17 04:23:51.909000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:06.332 [2024-11-17 04:23:51.909008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:06.332 [2024-11-17 04:23:51.909014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:06.332 [2024-11-17 04:23:51.909022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:06.332 [2024-11-17 04:23:51.909028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:06.332 [2024-11-17 04:23:51.909035] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:06.332 [2024-11-17 04:23:51.909042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.332 [2024-11-17 04:23:51.909050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:06.332 [2024-11-17 04:23:51.909057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:06.332 [2024-11-17 04:23:51.909064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:06.332 [2024-11-17 04:23:51.909070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:06.332 [2024-11-17 04:23:51.909077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.332 [2024-11-17 04:23:51.909084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:06.332 [2024-11-17 04:23:51.909093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:18:06.332 [2024-11-17 04:23:51.909098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.332 [2024-11-17 04:23:51.909127] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:06.332 [2024-11-17 04:23:51.909135] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:10.537 [2024-11-17 04:23:55.513911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.514008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:10.537 [2024-11-17 04:23:55.514038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3604.757 ms 00:18:10.537 [2024-11-17 04:23:55.514049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.534069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.534136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.537 [2024-11-17 04:23:55.534156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.866 ms 00:18:10.537 [2024-11-17 04:23:55.534166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.534295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.534305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:10.537 [2024-11-17 04:23:55.534317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:10.537 [2024-11-17 04:23:55.534327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.552307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.552369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.537 [2024-11-17 04:23:55.552404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.876 ms 00:18:10.537 [2024-11-17 04:23:55.552417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.552467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.552476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.537 [2024-11-17 04:23:55.552489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.537 [2024-11-17 04:23:55.552497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.553270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.553313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.537 [2024-11-17 04:23:55.553328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:18:10.537 [2024-11-17 04:23:55.553337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 
[2024-11-17 04:23:55.553486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.553505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.537 [2024-11-17 04:23:55.553519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:18:10.537 [2024-11-17 04:23:55.553529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.565881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.565934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.537 [2024-11-17 04:23:55.565950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.323 ms 00:18:10.537 [2024-11-17 04:23:55.565960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.577556] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:10.537 [2024-11-17 04:23:55.582662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.582717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:10.537 [2024-11-17 04:23:55.582731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.599 ms 00:18:10.537 [2024-11-17 04:23:55.582743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.676347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.676437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:10.537 [2024-11-17 04:23:55.676458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.567 ms 00:18:10.537 [2024-11-17 04:23:55.676474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.676675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.676692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:10.537 [2024-11-17 04:23:55.676703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:18:10.537 [2024-11-17 04:23:55.676720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.683836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.683901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:10.537 [2024-11-17 04:23:55.683914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.056 ms 00:18:10.537 [2024-11-17 04:23:55.683929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.689735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.689796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:10.537 [2024-11-17 04:23:55.689808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.777 ms 00:18:10.537 [2024-11-17 04:23:55.689820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.690147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.690164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:10.537 
[2024-11-17 04:23:55.690173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:18:10.537 [2024-11-17 04:23:55.690187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.740212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.740289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:10.537 [2024-11-17 04:23:55.740304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.981 ms 00:18:10.537 [2024-11-17 04:23:55.740320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.748395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.748453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:10.537 [2024-11-17 04:23:55.748466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.937 ms 00:18:10.537 [2024-11-17 04:23:55.748479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.755020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.755079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:10.537 [2024-11-17 04:23:55.755090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.488 ms 00:18:10.537 [2024-11-17 04:23:55.755101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.537 [2024-11-17 04:23:55.764617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.537 [2024-11-17 04:23:55.764724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:10.538 [2024-11-17 04:23:55.764754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.459 ms 00:18:10.538 [2024-11-17 04:23:55.764782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:55.764883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:55.764913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:10.538 [2024-11-17 04:23:55.764936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:10.538 [2024-11-17 04:23:55.764972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:55.765129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:55.765158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:10.538 [2024-11-17 04:23:55.765181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:10.538 [2024-11-17 04:23:55.765206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:55.767467] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3867.184 ms, result 0 00:18:10.538 { 00:18:10.538 "name": "ftl0", 00:18:10.538 "uuid": "56f46c39-64c9-4241-8388-8d11dc80ea47" 00:18:10.538 } 00:18:10.538 04:23:55 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:10.538 04:23:55 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:10.538 04:23:56 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:10.538 04:23:56 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:10.538 [2024-11-17 04:23:56.211368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.211447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:10.538 [2024-11-17 04:23:56.211473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.538 [2024-11-17 04:23:56.211488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.211527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:10.538 [2024-11-17 04:23:56.212415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.212477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:10.538 [2024-11-17 04:23:56.212493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:18:10.538 [2024-11-17 04:23:56.212508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.212902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.212929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:10.538 [2024-11-17 04:23:56.212944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:18:10.538 [2024-11-17 04:23:56.212969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.216817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.216868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:10.538 [2024-11-17 04:23:56.216883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.823 ms 00:18:10.538 [2024-11-17 04:23:56.216898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.223274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.223334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:10.538 [2024-11-17 04:23:56.223350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.337 ms 00:18:10.538 [2024-11-17 04:23:56.223364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.226277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.226351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:10.538 [2024-11-17 04:23:56.226366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.779 ms 00:18:10.538 [2024-11-17 04:23:56.226399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.232080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.232152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:10.538 [2024-11-17 04:23:56.232168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.617 ms 00:18:10.538 [2024-11-17 04:23:56.232182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.232395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.232420] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:10.538 [2024-11-17 04:23:56.232440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:10.538 [2024-11-17 04:23:56.232456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.235787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.235852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:10.538 [2024-11-17 04:23:56.235867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:18:10.538 [2024-11-17 04:23:56.235880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.238768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.238841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:10.538 [2024-11-17 04:23:56.238855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:18:10.538 [2024-11-17 04:23:56.238868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.241174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.241240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:10.538 [2024-11-17 04:23:56.241254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:18:10.538 [2024-11-17 04:23:56.241267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.243585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.538 [2024-11-17 04:23:56.243651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:10.538 [2024-11-17 04:23:56.243664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:18:10.538 [2024-11-17 04:23:56.243682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.538 [2024-11-17 04:23:56.243739] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:10.538 [2024-11-17 04:23:56.243763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243897] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.243998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.244010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.244025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.244038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.244056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.244069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.244083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.244096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:10.538 [2024-11-17 04:23:56.244114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 
[2024-11-17 04:23:56.244254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:10.539 [2024-11-17 04:23:56.244653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.244989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:10.539 [2024-11-17 04:23:56.245181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:10.540 [2024-11-17 04:23:56.245194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:10.540 [2024-11-17 04:23:56.245210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:10.540 [2024-11-17 04:23:56.245223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:10.540 [2024-11-17 04:23:56.245239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:10.540 [2024-11-17 04:23:56.245253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:10.540 [2024-11-17 04:23:56.245283] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:10.540 [2024-11-17 04:23:56.245297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 56f46c39-64c9-4241-8388-8d11dc80ea47 00:18:10.540 [2024-11-17 04:23:56.245317] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:10.540 [2024-11-17 04:23:56.245329] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:10.540 [2024-11-17 04:23:56.245345] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:10.540 [2024-11-17 04:23:56.245358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:10.540 [2024-11-17 04:23:56.245398] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:10.540 [2024-11-17 04:23:56.245424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:10.540 [2024-11-17 04:23:56.245441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:10.540 [2024-11-17 04:23:56.245452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:10.540 [2024-11-17 04:23:56.245467] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:10.540 [2024-11-17 04:23:56.245480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.540 [2024-11-17 04:23:56.245496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:10.540 [2024-11-17 04:23:56.245511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:18:10.540 [2024-11-17 04:23:56.245526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.540 [2024-11-17 04:23:56.248445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.540 [2024-11-17 04:23:56.248503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:10.540 [2024-11-17 04:23:56.248520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.859 ms 00:18:10.540 [2024-11-17 04:23:56.248540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.540 [2024-11-17 04:23:56.248715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.540 [2024-11-17 04:23:56.248745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:10.540 [2024-11-17 04:23:56.248762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:18:10.540 [2024-11-17 04:23:56.248779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.540 [2024-11-17 04:23:56.257665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.540 [2024-11-17 04:23:56.257731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.540 [2024-11-17 04:23:56.257748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.540 [2024-11-17 04:23:56.257766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.540 [2024-11-17 04:23:56.257857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.540 [2024-11-17 04:23:56.257874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.540 [2024-11-17 04:23:56.257889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.540 [2024-11-17 04:23:56.257904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.540 [2024-11-17 04:23:56.258042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.540 [2024-11-17 04:23:56.258071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.540 [2024-11-17 04:23:56.258085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.540 [2024-11-17 04:23:56.258101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.540 [2024-11-17 04:23:56.258131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.540 [2024-11-17 04:23:56.258148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.540 [2024-11-17 04:23:56.258161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.540 [2024-11-17 04:23:56.258177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.801 [2024-11-17 04:23:56.273038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.801 [2024-11-17 04:23:56.273115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.801 [2024-11-17 04:23:56.273132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.801 
[2024-11-17 04:23:56.273150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.801 [2024-11-17 04:23:56.284037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.801 [2024-11-17 04:23:56.284110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.801 [2024-11-17 04:23:56.284126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.801 [2024-11-17 04:23:56.284140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.801 [2024-11-17 04:23:56.284241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.801 [2024-11-17 04:23:56.284262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.801 [2024-11-17 04:23:56.284289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.801 [2024-11-17 04:23:56.284305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.801 [2024-11-17 04:23:56.284391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.801 [2024-11-17 04:23:56.284415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.801 [2024-11-17 04:23:56.284429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.801 [2024-11-17 04:23:56.284445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.801 [2024-11-17 04:23:56.284560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.801 [2024-11-17 04:23:56.284580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.801 [2024-11-17 04:23:56.284594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.801 [2024-11-17 04:23:56.284610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.801 [2024-11-17 04:23:56.284665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.801 [2024-11-17 04:23:56.284688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:10.801 [2024-11-17 04:23:56.284710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.801 [2024-11-17 04:23:56.284726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.801 [2024-11-17 04:23:56.284785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.801 [2024-11-17 04:23:56.284808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.801 [2024-11-17 04:23:56.284823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.801 [2024-11-17 04:23:56.284840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.802 [2024-11-17 04:23:56.284912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.802 [2024-11-17 04:23:56.284941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.802 [2024-11-17 04:23:56.284955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.802 [2024-11-17 04:23:56.284974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.802 [2024-11-17 04:23:56.285172] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.747 ms, result 0 00:18:10.802 true 00:18:10.802 04:23:56 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 85851 00:18:10.802 
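At this point in the run the 'FTL shutdown' management process has completed and bdev_ftl_unload has returned true; the script then kills the original target process and, further down, regenerates test data and writes it back through a restored ftl0 with spdk_dd. The lines below are a minimal illustrative sketch of that unload-and-restore flow as it appears in the restore.sh trace above, not the test script itself; SPDK_DIR, CONFIG_JSON and TESTFILE are placeholder names, not values taken from this log.

    # Sketch of the flow traced above (restore.sh@61-73), under assumed paths.
    SPDK_DIR=/path/to/spdk            # assumption: local SPDK checkout
    CONFIG_JSON=/tmp/ftl.json         # assumption: where the saved bdev config goes
    TESTFILE=/tmp/testfile            # assumption: scratch data file

    # 1. Capture the current bdev subsystem configuration, wrapped into a JSON
    #    document, so ftl0 can later be re-created with the same parameters
    #    (restore.sh@61-63 in the trace).
    {
      echo '{"subsystems": ['
      "$SPDK_DIR/scripts/rpc.py" save_subsystem_config -n bdev
      echo ']}'
    } > "$CONFIG_JSON"

    # 2. Cleanly unload the FTL bdev; this is what triggers the 'FTL shutdown'
    #    trace_step notices above (restore.sh@65). The original target process
    #    is then killed (restore.sh@66).
    "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unload -b ftl0

    # 3. Generate 1 GiB of random data and checksum it (restore.sh@69-70):
    #    256K blocks of 4 KiB = 262144 * 4096 bytes = 1073741824 bytes.
    dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K
    md5sum "$TESTFILE"

    # 4. Write the data through the restored device with spdk_dd, pointing it
    #    at the saved configuration so ftl0 is brought back up first
    #    (restore.sh@73).
    "$SPDK_DIR/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 --json="$CONFIG_JSON"

The saved JSON is what lets spdk_dd start its own SPDK application and recreate ftl0 with the same base and cache bdevs, which is why a second 'FTL startup' sequence and the Copying progress lines appear further down in this log.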
04:23:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 85851 ']' 00:18:10.802 04:23:56 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 85851 00:18:10.802 04:23:56 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:18:10.802 04:23:56 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.802 04:23:56 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85851 00:18:10.802 04:23:56 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.802 04:23:56 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.802 04:23:56 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85851' 00:18:10.802 killing process with pid 85851 00:18:10.802 04:23:56 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 85851 00:18:10.802 04:23:56 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 85851 00:18:16.093 04:24:01 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:20.294 262144+0 records in 00:18:20.294 262144+0 records out 00:18:20.294 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.96183 s, 271 MB/s 00:18:20.294 04:24:05 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:22.209 04:24:07 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:22.209 [2024-11-17 04:24:07.766518] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:18:22.209 [2024-11-17 04:24:07.766660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86065 ] 00:18:22.209 [2024-11-17 04:24:07.927646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.470 [2024-11-17 04:24:07.958088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.470 [2024-11-17 04:24:08.072964] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.470 [2024-11-17 04:24:08.073046] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.732 [2024-11-17 04:24:08.234617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.234687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.732 [2024-11-17 04:24:08.234702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:22.732 [2024-11-17 04:24:08.234712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.234776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.234787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.732 [2024-11-17 04:24:08.234796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:22.732 [2024-11-17 04:24:08.234809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.234834] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:18:22.732 [2024-11-17 04:24:08.235230] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.732 [2024-11-17 04:24:08.235282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.235291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.732 [2024-11-17 04:24:08.235305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:18:22.732 [2024-11-17 04:24:08.235317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.237105] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:22.732 [2024-11-17 04:24:08.241023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.241087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:22.732 [2024-11-17 04:24:08.241100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.920 ms 00:18:22.732 [2024-11-17 04:24:08.241112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.241191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.241205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:22.732 [2024-11-17 04:24:08.241214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:22.732 [2024-11-17 04:24:08.241221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.249666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.249713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.732 [2024-11-17 04:24:08.249732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.400 ms 00:18:22.732 [2024-11-17 04:24:08.249741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.249848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.249860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.732 [2024-11-17 04:24:08.249869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:22.732 [2024-11-17 04:24:08.249880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.249943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.249953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.732 [2024-11-17 04:24:08.249962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.732 [2024-11-17 04:24:08.249970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.250003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.732 [2024-11-17 04:24:08.252143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.252183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.732 [2024-11-17 04:24:08.252193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:18:22.732 [2024-11-17 04:24:08.252201] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.252237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.252246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.732 [2024-11-17 04:24:08.252255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:22.732 [2024-11-17 04:24:08.252263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.252312] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:22.732 [2024-11-17 04:24:08.252340] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:22.732 [2024-11-17 04:24:08.252397] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:22.732 [2024-11-17 04:24:08.252418] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:22.732 [2024-11-17 04:24:08.252527] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.732 [2024-11-17 04:24:08.252542] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.732 [2024-11-17 04:24:08.252554] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:22.732 [2024-11-17 04:24:08.252573] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.732 [2024-11-17 04:24:08.252582] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.732 [2024-11-17 04:24:08.252591] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:22.732 [2024-11-17 04:24:08.252598] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.732 [2024-11-17 04:24:08.252606] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.732 [2024-11-17 04:24:08.252614] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.732 [2024-11-17 04:24:08.252623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.252631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.732 [2024-11-17 04:24:08.252643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:18:22.732 [2024-11-17 04:24:08.252653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.252737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.732 [2024-11-17 04:24:08.252749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.732 [2024-11-17 04:24:08.252757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:22.732 [2024-11-17 04:24:08.252764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.732 [2024-11-17 04:24:08.252863] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.732 [2024-11-17 04:24:08.252888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.732 [2024-11-17 04:24:08.252901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.732 
[2024-11-17 04:24:08.252911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.732 [2024-11-17 04:24:08.252920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.732 [2024-11-17 04:24:08.252935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.732 [2024-11-17 04:24:08.252944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:22.733 [2024-11-17 04:24:08.252952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.733 [2024-11-17 04:24:08.252960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:22.733 [2024-11-17 04:24:08.252969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.733 [2024-11-17 04:24:08.252982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.733 [2024-11-17 04:24:08.252991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:22.733 [2024-11-17 04:24:08.252999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.733 [2024-11-17 04:24:08.253007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.733 [2024-11-17 04:24:08.253015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:22.733 [2024-11-17 04:24:08.253023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.733 [2024-11-17 04:24:08.253032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.733 [2024-11-17 04:24:08.253041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:22.733 [2024-11-17 04:24:08.253048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.733 [2024-11-17 04:24:08.253057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.733 [2024-11-17 04:24:08.253065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:22.733 [2024-11-17 04:24:08.253073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.733 [2024-11-17 04:24:08.253081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.733 [2024-11-17 04:24:08.253090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:22.733 [2024-11-17 04:24:08.253098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.733 [2024-11-17 04:24:08.253107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.733 [2024-11-17 04:24:08.253120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:22.733 [2024-11-17 04:24:08.253129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.733 [2024-11-17 04:24:08.253137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.733 [2024-11-17 04:24:08.253145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:22.733 [2024-11-17 04:24:08.253152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.733 [2024-11-17 04:24:08.253160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.733 [2024-11-17 04:24:08.253168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:22.733 [2024-11-17 04:24:08.253176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.733 [2024-11-17 04:24:08.253184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:18:22.733 [2024-11-17 04:24:08.253191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:22.733 [2024-11-17 04:24:08.253199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.733 [2024-11-17 04:24:08.253206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.733 [2024-11-17 04:24:08.253215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:22.733 [2024-11-17 04:24:08.253222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.733 [2024-11-17 04:24:08.253230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.733 [2024-11-17 04:24:08.253237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:22.733 [2024-11-17 04:24:08.253247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.733 [2024-11-17 04:24:08.253256] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.733 [2024-11-17 04:24:08.253268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.733 [2024-11-17 04:24:08.253280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.733 [2024-11-17 04:24:08.253289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.733 [2024-11-17 04:24:08.253298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.733 [2024-11-17 04:24:08.253306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.733 [2024-11-17 04:24:08.253314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.733 [2024-11-17 04:24:08.253323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.733 [2024-11-17 04:24:08.253331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.733 [2024-11-17 04:24:08.253337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.733 [2024-11-17 04:24:08.253346] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.733 [2024-11-17 04:24:08.253359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.733 [2024-11-17 04:24:08.253368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:22.733 [2024-11-17 04:24:08.253398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:22.733 [2024-11-17 04:24:08.253406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:22.733 [2024-11-17 04:24:08.253416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:22.733 [2024-11-17 04:24:08.253425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:22.733 [2024-11-17 04:24:08.253432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:22.733 [2024-11-17 04:24:08.253439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:22.733 [2024-11-17 04:24:08.253447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:22.733 [2024-11-17 04:24:08.253455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:22.733 [2024-11-17 04:24:08.253463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:22.733 [2024-11-17 04:24:08.253470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:22.733 [2024-11-17 04:24:08.253477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:22.733 [2024-11-17 04:24:08.253484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:22.733 [2024-11-17 04:24:08.253492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:22.733 [2024-11-17 04:24:08.253499] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.733 [2024-11-17 04:24:08.253508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.733 [2024-11-17 04:24:08.253516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.733 [2024-11-17 04:24:08.253523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.733 [2024-11-17 04:24:08.253531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.733 [2024-11-17 04:24:08.253540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.733 [2024-11-17 04:24:08.253550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.733 [2024-11-17 04:24:08.253559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.733 [2024-11-17 04:24:08.253574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:18:22.733 [2024-11-17 04:24:08.253582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.733 [2024-11-17 04:24:08.268216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.733 [2024-11-17 04:24:08.268268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.733 [2024-11-17 04:24:08.268310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.580 ms 00:18:22.733 [2024-11-17 04:24:08.268318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.733 [2024-11-17 04:24:08.268428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.733 [2024-11-17 04:24:08.268439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.733 [2024-11-17 04:24:08.268448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 
00:18:22.733 [2024-11-17 04:24:08.268456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.733 [2024-11-17 04:24:08.288008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.733 [2024-11-17 04:24:08.288070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.733 [2024-11-17 04:24:08.288084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.487 ms 00:18:22.733 [2024-11-17 04:24:08.288093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.733 [2024-11-17 04:24:08.288143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.288155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.734 [2024-11-17 04:24:08.288164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:22.734 [2024-11-17 04:24:08.288179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.288837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.288872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.734 [2024-11-17 04:24:08.288889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:18:22.734 [2024-11-17 04:24:08.288899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.289059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.289073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.734 [2024-11-17 04:24:08.289083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:18:22.734 [2024-11-17 04:24:08.289091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.297406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.297458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.734 [2024-11-17 04:24:08.297476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.292 ms 00:18:22.734 [2024-11-17 04:24:08.297485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.301515] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:22.734 [2024-11-17 04:24:08.301571] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.734 [2024-11-17 04:24:08.301585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.301593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.734 [2024-11-17 04:24:08.301603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.007 ms 00:18:22.734 [2024-11-17 04:24:08.301610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.317394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.317445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.734 [2024-11-17 04:24:08.317461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.720 ms 00:18:22.734 [2024-11-17 04:24:08.317469] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.320580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.320630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.734 [2024-11-17 04:24:08.320641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:18:22.734 [2024-11-17 04:24:08.320649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.323490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.323538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:22.734 [2024-11-17 04:24:08.323549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:18:22.734 [2024-11-17 04:24:08.323557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.323920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.323933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.734 [2024-11-17 04:24:08.323943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:18:22.734 [2024-11-17 04:24:08.323950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.348340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.348427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.734 [2024-11-17 04:24:08.348440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.371 ms 00:18:22.734 [2024-11-17 04:24:08.348449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.356936] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:22.734 [2024-11-17 04:24:08.360255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.360315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.734 [2024-11-17 04:24:08.360328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.751 ms 00:18:22.734 [2024-11-17 04:24:08.360340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.360464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.360477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.734 [2024-11-17 04:24:08.360487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:22.734 [2024-11-17 04:24:08.360495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.360567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.360577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.734 [2024-11-17 04:24:08.360589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:22.734 [2024-11-17 04:24:08.360597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.360626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.360634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:18:22.734 [2024-11-17 04:24:08.360643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:22.734 [2024-11-17 04:24:08.360652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.360690] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.734 [2024-11-17 04:24:08.360702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.360710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.734 [2024-11-17 04:24:08.360719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:22.734 [2024-11-17 04:24:08.360729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.366323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.366528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.734 [2024-11-17 04:24:08.366561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.571 ms 00:18:22.734 [2024-11-17 04:24:08.366570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.366650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.734 [2024-11-17 04:24:08.366661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:22.734 [2024-11-17 04:24:08.366673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:22.734 [2024-11-17 04:24:08.366686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.734 [2024-11-17 04:24:08.367834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.747 ms, result 0 00:18:23.678  [2024-11-17T04:24:10.788Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-17T04:24:11.733Z] Copying: 40/1024 [MB] (20 MBps) [2024-11-17T04:24:12.677Z] Copying: 57/1024 [MB] (17 MBps) [2024-11-17T04:24:13.619Z] Copying: 81/1024 [MB] (23 MBps) [2024-11-17T04:24:14.562Z] Copying: 101/1024 [MB] (20 MBps) [2024-11-17T04:24:15.503Z] Copying: 113/1024 [MB] (12 MBps) [2024-11-17T04:24:16.445Z] Copying: 124/1024 [MB] (10 MBps) [2024-11-17T04:24:17.522Z] Copying: 134/1024 [MB] (10 MBps) [2024-11-17T04:24:18.467Z] Copying: 149/1024 [MB] (14 MBps) [2024-11-17T04:24:19.422Z] Copying: 163/1024 [MB] (14 MBps) [2024-11-17T04:24:20.808Z] Copying: 177/1024 [MB] (13 MBps) [2024-11-17T04:24:21.381Z] Copying: 194/1024 [MB] (17 MBps) [2024-11-17T04:24:22.767Z] Copying: 211/1024 [MB] (16 MBps) [2024-11-17T04:24:23.709Z] Copying: 227/1024 [MB] (16 MBps) [2024-11-17T04:24:24.652Z] Copying: 243/1024 [MB] (15 MBps) [2024-11-17T04:24:25.597Z] Copying: 253/1024 [MB] (10 MBps) [2024-11-17T04:24:26.545Z] Copying: 265/1024 [MB] (11 MBps) [2024-11-17T04:24:27.491Z] Copying: 276/1024 [MB] (11 MBps) [2024-11-17T04:24:28.432Z] Copying: 287/1024 [MB] (10 MBps) [2024-11-17T04:24:29.852Z] Copying: 308/1024 [MB] (21 MBps) [2024-11-17T04:24:30.422Z] Copying: 318/1024 [MB] (10 MBps) [2024-11-17T04:24:31.809Z] Copying: 329/1024 [MB] (11 MBps) [2024-11-17T04:24:32.382Z] Copying: 340/1024 [MB] (11 MBps) [2024-11-17T04:24:33.770Z] Copying: 351/1024 [MB] (10 MBps) [2024-11-17T04:24:34.711Z] Copying: 362/1024 [MB] (10 MBps) [2024-11-17T04:24:35.655Z] Copying: 372/1024 [MB] (10 MBps) [2024-11-17T04:24:36.596Z] Copying: 391/1024 [MB] (19 MBps) 
[2024-11-17T04:24:37.537Z] Copying: 405/1024 [MB] (13 MBps) [2024-11-17T04:24:38.484Z] Copying: 430/1024 [MB] (25 MBps) [2024-11-17T04:24:39.426Z] Copying: 447/1024 [MB] (17 MBps) [2024-11-17T04:24:40.819Z] Copying: 458/1024 [MB] (10 MBps) [2024-11-17T04:24:41.392Z] Copying: 475/1024 [MB] (16 MBps) [2024-11-17T04:24:42.781Z] Copying: 496844/1048576 [kB] (10016 kBps) [2024-11-17T04:24:43.724Z] Copying: 502/1024 [MB] (16 MBps) [2024-11-17T04:24:44.670Z] Copying: 518/1024 [MB] (16 MBps) [2024-11-17T04:24:45.613Z] Copying: 534/1024 [MB] (15 MBps) [2024-11-17T04:24:46.550Z] Copying: 544/1024 [MB] (10 MBps) [2024-11-17T04:24:47.493Z] Copying: 571/1024 [MB] (26 MBps) [2024-11-17T04:24:48.436Z] Copying: 597/1024 [MB] (26 MBps) [2024-11-17T04:24:49.453Z] Copying: 616/1024 [MB] (18 MBps) [2024-11-17T04:24:50.398Z] Copying: 632/1024 [MB] (16 MBps) [2024-11-17T04:24:51.787Z] Copying: 651/1024 [MB] (18 MBps) [2024-11-17T04:24:52.732Z] Copying: 661/1024 [MB] (10 MBps) [2024-11-17T04:24:53.679Z] Copying: 672/1024 [MB] (10 MBps) [2024-11-17T04:24:54.619Z] Copying: 684/1024 [MB] (12 MBps) [2024-11-17T04:24:55.554Z] Copying: 700/1024 [MB] (15 MBps) [2024-11-17T04:24:56.487Z] Copying: 732/1024 [MB] (31 MBps) [2024-11-17T04:24:57.424Z] Copying: 763/1024 [MB] (31 MBps) [2024-11-17T04:24:58.814Z] Copying: 794/1024 [MB] (30 MBps) [2024-11-17T04:24:59.393Z] Copying: 804/1024 [MB] (10 MBps) [2024-11-17T04:25:00.770Z] Copying: 818/1024 [MB] (13 MBps) [2024-11-17T04:25:01.715Z] Copying: 848/1024 [MB] (30 MBps) [2024-11-17T04:25:02.657Z] Copying: 863/1024 [MB] (14 MBps) [2024-11-17T04:25:03.590Z] Copying: 874/1024 [MB] (11 MBps) [2024-11-17T04:25:04.525Z] Copying: 906/1024 [MB] (31 MBps) [2024-11-17T04:25:05.459Z] Copying: 937/1024 [MB] (31 MBps) [2024-11-17T04:25:06.393Z] Copying: 968/1024 [MB] (31 MBps) [2024-11-17T04:25:07.332Z] Copying: 998/1024 [MB] (30 MBps) [2024-11-17T04:25:07.332Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-17 04:25:07.221071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.221171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:21.605 [2024-11-17 04:25:07.221226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:21.605 [2024-11-17 04:25:07.221277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.221315] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:21.605 [2024-11-17 04:25:07.221757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.221840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:21.605 [2024-11-17 04:25:07.221885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:19:21.605 [2024-11-17 04:25:07.221903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.223314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.223420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:21.605 [2024-11-17 04:25:07.223470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:19:21.605 [2024-11-17 04:25:07.223489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.235723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 
04:25:07.235820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:21.605 [2024-11-17 04:25:07.235868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.174 ms 00:19:21.605 [2024-11-17 04:25:07.235886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.240710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.240796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:21.605 [2024-11-17 04:25:07.240837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:19:21.605 [2024-11-17 04:25:07.240855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.241819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.241905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:21.605 [2024-11-17 04:25:07.241945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:19:21.605 [2024-11-17 04:25:07.241963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.245197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.245287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:21.605 [2024-11-17 04:25:07.245325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:19:21.605 [2024-11-17 04:25:07.245342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.245469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.245532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:21.605 [2024-11-17 04:25:07.245565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:21.605 [2024-11-17 04:25:07.245581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.247328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.247427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:21.605 [2024-11-17 04:25:07.247469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:19:21.605 [2024-11-17 04:25:07.247487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.248601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.248682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:21.605 [2024-11-17 04:25:07.248720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:19:21.605 [2024-11-17 04:25:07.248737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.249882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.249963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:21.605 [2024-11-17 04:25:07.250002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:19:21.605 [2024-11-17 04:25:07.250018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.250907] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.605 [2024-11-17 04:25:07.250988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:21.605 [2024-11-17 04:25:07.251028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:19:21.605 [2024-11-17 04:25:07.251045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.605 [2024-11-17 04:25:07.251073] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:21.605 [2024-11-17 04:25:07.251126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.251985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252055] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.252906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.253094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.253242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.253402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.253430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.253450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.253471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.253493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.253514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:21.605 [2024-11-17 04:25:07.253534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 
[2024-11-17 04:25:07.253616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.253987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:19:21.606 [2024-11-17 04:25:07.254135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:21.606 [2024-11-17 04:25:07.254775] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:21.606 [2024-11-17 04:25:07.254808] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 56f46c39-64c9-4241-8388-8d11dc80ea47 00:19:21.606 [2024-11-17 04:25:07.254829] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:21.606 [2024-11-17 04:25:07.254849] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:21.606 [2024-11-17 04:25:07.254867] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:21.606 [2024-11-17 04:25:07.254905] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:21.606 [2024-11-17 04:25:07.254924] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:21.606 [2024-11-17 04:25:07.254945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:21.606 [2024-11-17 04:25:07.254964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:21.606 [2024-11-17 04:25:07.254982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:21.606 [2024-11-17 04:25:07.255000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:21.606 [2024-11-17 04:25:07.255022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.606 [2024-11-17 04:25:07.255066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:21.606 [2024-11-17 04:25:07.255091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.948 ms 00:19:21.606 [2024-11-17 04:25:07.255112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.606 [2024-11-17 04:25:07.257551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.606 [2024-11-17 04:25:07.257615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:21.606 [2024-11-17 04:25:07.257639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.374 ms 00:19:21.606 [2024-11-17 04:25:07.257659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.606 [2024-11-17 04:25:07.257843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.606 [2024-11-17 04:25:07.257878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:21.606 [2024-11-17 04:25:07.257901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:21.606 [2024-11-17 04:25:07.257920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.606 [2024-11-17 04:25:07.265757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.606 [2024-11-17 04:25:07.265787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:21.606 [2024-11-17 04:25:07.265796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:19:21.606 [2024-11-17 04:25:07.265803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.606 [2024-11-17 04:25:07.265852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.606 [2024-11-17 04:25:07.265861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:21.606 [2024-11-17 04:25:07.265868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.606 [2024-11-17 04:25:07.265874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.606 [2024-11-17 04:25:07.265922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.606 [2024-11-17 04:25:07.265931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:21.606 [2024-11-17 04:25:07.265939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.606 [2024-11-17 04:25:07.265946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.606 [2024-11-17 04:25:07.265959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.607 [2024-11-17 04:25:07.265969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:21.607 [2024-11-17 04:25:07.265977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.607 [2024-11-17 04:25:07.265987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.607 [2024-11-17 04:25:07.274280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.607 [2024-11-17 04:25:07.274318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.607 [2024-11-17 04:25:07.274329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.607 [2024-11-17 04:25:07.274337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.607 [2024-11-17 04:25:07.281066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.607 [2024-11-17 04:25:07.281108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:21.607 [2024-11-17 04:25:07.281117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.607 [2024-11-17 04:25:07.281124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.607 [2024-11-17 04:25:07.281147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.607 [2024-11-17 04:25:07.281156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:21.607 [2024-11-17 04:25:07.281163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.607 [2024-11-17 04:25:07.281170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.607 [2024-11-17 04:25:07.281210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.607 [2024-11-17 04:25:07.281218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:21.607 [2024-11-17 04:25:07.281229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.607 [2024-11-17 04:25:07.281236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.607 [2024-11-17 04:25:07.281295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.607 [2024-11-17 04:25:07.281304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:21.607 
[2024-11-17 04:25:07.281312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.607 [2024-11-17 04:25:07.281319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.607 [2024-11-17 04:25:07.281344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.607 [2024-11-17 04:25:07.281352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:21.607 [2024-11-17 04:25:07.281359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.607 [2024-11-17 04:25:07.281369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.607 [2024-11-17 04:25:07.281419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.607 [2024-11-17 04:25:07.281427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:21.607 [2024-11-17 04:25:07.281435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.607 [2024-11-17 04:25:07.281442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.607 [2024-11-17 04:25:07.281481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.607 [2024-11-17 04:25:07.281490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:21.607 [2024-11-17 04:25:07.281505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.607 [2024-11-17 04:25:07.281512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.607 [2024-11-17 04:25:07.281620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.519 ms, result 0 00:19:22.178 00:19:22.178 00:19:22.178 04:25:07 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:22.178 [2024-11-17 04:25:07.777243] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:19:22.178 [2024-11-17 04:25:07.777415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86686 ] 00:19:22.440 [2024-11-17 04:25:07.939261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.440 [2024-11-17 04:25:07.968095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.440 [2024-11-17 04:25:08.083475] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.440 [2024-11-17 04:25:08.083560] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.703 [2024-11-17 04:25:08.244958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.703 [2024-11-17 04:25:08.245021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.703 [2024-11-17 04:25:08.245037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:22.703 [2024-11-17 04:25:08.245046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.245107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.703 [2024-11-17 04:25:08.245119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.703 [2024-11-17 04:25:08.245127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:22.703 [2024-11-17 04:25:08.245135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.245165] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.703 [2024-11-17 04:25:08.245475] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.703 [2024-11-17 04:25:08.245500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.703 [2024-11-17 04:25:08.245509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.703 [2024-11-17 04:25:08.245519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:22.703 [2024-11-17 04:25:08.245530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.247308] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:22.703 [2024-11-17 04:25:08.251040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.703 [2024-11-17 04:25:08.251094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:22.703 [2024-11-17 04:25:08.251107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.734 ms 00:19:22.703 [2024-11-17 04:25:08.251129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.251203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.703 [2024-11-17 04:25:08.251216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:22.703 [2024-11-17 04:25:08.251225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:22.703 [2024-11-17 04:25:08.251233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.259261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:22.703 [2024-11-17 04:25:08.259306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.703 [2024-11-17 04:25:08.259325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.984 ms 00:19:22.703 [2024-11-17 04:25:08.259333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.259457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.703 [2024-11-17 04:25:08.259472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.703 [2024-11-17 04:25:08.259482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:22.703 [2024-11-17 04:25:08.259496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.259558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.703 [2024-11-17 04:25:08.259569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.703 [2024-11-17 04:25:08.259579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:22.703 [2024-11-17 04:25:08.259587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.259614] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.703 [2024-11-17 04:25:08.261686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.703 [2024-11-17 04:25:08.261722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.703 [2024-11-17 04:25:08.261739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:19:22.703 [2024-11-17 04:25:08.261747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.261780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.703 [2024-11-17 04:25:08.261789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.703 [2024-11-17 04:25:08.261797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:22.703 [2024-11-17 04:25:08.261805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.261834] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:22.703 [2024-11-17 04:25:08.261855] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:22.703 [2024-11-17 04:25:08.261892] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:22.703 [2024-11-17 04:25:08.261914] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:22.703 [2024-11-17 04:25:08.262021] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.703 [2024-11-17 04:25:08.262033] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.703 [2024-11-17 04:25:08.262044] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.703 [2024-11-17 04:25:08.262058] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.703 [2024-11-17 04:25:08.262067] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.703 [2024-11-17 04:25:08.262076] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:22.703 [2024-11-17 04:25:08.262088] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.703 [2024-11-17 04:25:08.262096] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.703 [2024-11-17 04:25:08.262103] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.703 [2024-11-17 04:25:08.262112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.703 [2024-11-17 04:25:08.262120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.703 [2024-11-17 04:25:08.262132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:22.703 [2024-11-17 04:25:08.262143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.703 [2024-11-17 04:25:08.262225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.704 [2024-11-17 04:25:08.262237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.704 [2024-11-17 04:25:08.262245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:22.704 [2024-11-17 04:25:08.262252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.704 [2024-11-17 04:25:08.262349] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.704 [2024-11-17 04:25:08.262360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.704 [2024-11-17 04:25:08.262370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.704 [2024-11-17 04:25:08.262401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.704 [2024-11-17 04:25:08.262425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:22.704 [2024-11-17 04:25:08.262442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.704 [2024-11-17 04:25:08.262450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.704 [2024-11-17 04:25:08.262470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.704 [2024-11-17 04:25:08.262478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:22.704 [2024-11-17 04:25:08.262486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.704 [2024-11-17 04:25:08.262494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.704 [2024-11-17 04:25:08.262501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:22.704 [2024-11-17 04:25:08.262509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.704 [2024-11-17 04:25:08.262527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:22.704 [2024-11-17 04:25:08.262535] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.704 [2024-11-17 04:25:08.262551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.704 [2024-11-17 04:25:08.262568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.704 [2024-11-17 04:25:08.262575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.704 [2024-11-17 04:25:08.262590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.704 [2024-11-17 04:25:08.262603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.704 [2024-11-17 04:25:08.262621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.704 [2024-11-17 04:25:08.262628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.704 [2024-11-17 04:25:08.262645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.704 [2024-11-17 04:25:08.262653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.704 [2024-11-17 04:25:08.262677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.704 [2024-11-17 04:25:08.262685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:22.704 [2024-11-17 04:25:08.262692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.704 [2024-11-17 04:25:08.262700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.704 [2024-11-17 04:25:08.262708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:22.704 [2024-11-17 04:25:08.262715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.704 [2024-11-17 04:25:08.262728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:22.704 [2024-11-17 04:25:08.262739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262746] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.704 [2024-11-17 04:25:08.262755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.704 [2024-11-17 04:25:08.262765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.704 [2024-11-17 04:25:08.262772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.704 [2024-11-17 04:25:08.262780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.704 [2024-11-17 04:25:08.262787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.704 [2024-11-17 04:25:08.262796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.704 
[2024-11-17 04:25:08.262803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.704 [2024-11-17 04:25:08.262810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.704 [2024-11-17 04:25:08.262817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.704 [2024-11-17 04:25:08.262825] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.704 [2024-11-17 04:25:08.262836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.704 [2024-11-17 04:25:08.262845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:22.704 [2024-11-17 04:25:08.262852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:22.704 [2024-11-17 04:25:08.262859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:22.704 [2024-11-17 04:25:08.262868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:22.704 [2024-11-17 04:25:08.262876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:22.704 [2024-11-17 04:25:08.262882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:22.704 [2024-11-17 04:25:08.262890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:22.704 [2024-11-17 04:25:08.262897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:22.704 [2024-11-17 04:25:08.262905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:22.704 [2024-11-17 04:25:08.262912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:22.704 [2024-11-17 04:25:08.262919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:22.704 [2024-11-17 04:25:08.262926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:22.704 [2024-11-17 04:25:08.262934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:22.704 [2024-11-17 04:25:08.262941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:22.704 [2024-11-17 04:25:08.262948] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.704 [2024-11-17 04:25:08.262955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.704 [2024-11-17 04:25:08.262967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.704 [2024-11-17 04:25:08.262974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.704 [2024-11-17 04:25:08.262982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.704 [2024-11-17 04:25:08.262993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.704 [2024-11-17 04:25:08.263000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.704 [2024-11-17 04:25:08.263008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.704 [2024-11-17 04:25:08.263016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:19:22.704 [2024-11-17 04:25:08.263027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.704 [2024-11-17 04:25:08.277158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.704 [2024-11-17 04:25:08.277431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.704 [2024-11-17 04:25:08.277453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.079 ms 00:19:22.704 [2024-11-17 04:25:08.277473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.704 [2024-11-17 04:25:08.277564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.704 [2024-11-17 04:25:08.277574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:22.704 [2024-11-17 04:25:08.277588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:22.704 [2024-11-17 04:25:08.277596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.704 [2024-11-17 04:25:08.300753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.704 [2024-11-17 04:25:08.300816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.704 [2024-11-17 04:25:08.300831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.096 ms 00:19:22.704 [2024-11-17 04:25:08.300851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.704 [2024-11-17 04:25:08.300905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.704 [2024-11-17 04:25:08.300917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.704 [2024-11-17 04:25:08.300928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:22.704 [2024-11-17 04:25:08.300943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.704 [2024-11-17 04:25:08.301552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.704 [2024-11-17 04:25:08.301591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.704 [2024-11-17 04:25:08.301605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:19:22.705 [2024-11-17 04:25:08.301627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.301814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.301836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.705 [2024-11-17 04:25:08.301847] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:19:22.705 [2024-11-17 04:25:08.301857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.310718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.310907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.705 [2024-11-17 04:25:08.310937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.835 ms 00:19:22.705 [2024-11-17 04:25:08.310945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.314806] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:22.705 [2024-11-17 04:25:08.314859] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:22.705 [2024-11-17 04:25:08.314872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.314880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:22.705 [2024-11-17 04:25:08.314890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.810 ms 00:19:22.705 [2024-11-17 04:25:08.314898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.330701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.330755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:22.705 [2024-11-17 04:25:08.330768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.743 ms 00:19:22.705 [2024-11-17 04:25:08.330776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.333725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.333911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:22.705 [2024-11-17 04:25:08.333929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms 00:19:22.705 [2024-11-17 04:25:08.333936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.336518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.336565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:22.705 [2024-11-17 04:25:08.336575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:19:22.705 [2024-11-17 04:25:08.336583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.336936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.336955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:22.705 [2024-11-17 04:25:08.336966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:19:22.705 [2024-11-17 04:25:08.336975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.360862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.360922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:22.705 [2024-11-17 04:25:08.360936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.869 ms 00:19:22.705 [2024-11-17 04:25:08.360957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.369333] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:22.705 [2024-11-17 04:25:08.372537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.372583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:22.705 [2024-11-17 04:25:08.372595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.527 ms 00:19:22.705 [2024-11-17 04:25:08.372607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.372685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.372696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:22.705 [2024-11-17 04:25:08.372706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:22.705 [2024-11-17 04:25:08.372714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.372781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.372791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:22.705 [2024-11-17 04:25:08.372804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:22.705 [2024-11-17 04:25:08.372813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.372838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.372847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:22.705 [2024-11-17 04:25:08.372856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:22.705 [2024-11-17 04:25:08.372869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.372911] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:22.705 [2024-11-17 04:25:08.372921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.372930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:22.705 [2024-11-17 04:25:08.372940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:22.705 [2024-11-17 04:25:08.372951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.378911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.378964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:22.705 [2024-11-17 04:25:08.378976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.940 ms 00:19:22.705 [2024-11-17 04:25:08.378985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 [2024-11-17 04:25:08.379074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.705 [2024-11-17 04:25:08.379085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:22.705 [2024-11-17 04:25:08.379098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:22.705 [2024-11-17 04:25:08.379107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.705 
[2024-11-17 04:25:08.380319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.869 ms, result 0 00:19:24.095  [2024-11-17T04:25:10.767Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-17T04:25:11.709Z] Copying: 28/1024 [MB] (13 MBps) [2024-11-17T04:25:12.650Z] Copying: 41/1024 [MB] (13 MBps) [2024-11-17T04:25:13.592Z] Copying: 57/1024 [MB] (16 MBps) [2024-11-17T04:25:14.977Z] Copying: 69/1024 [MB] (11 MBps) [2024-11-17T04:25:15.923Z] Copying: 82/1024 [MB] (13 MBps) [2024-11-17T04:25:16.865Z] Copying: 102/1024 [MB] (19 MBps) [2024-11-17T04:25:17.805Z] Copying: 114/1024 [MB] (12 MBps) [2024-11-17T04:25:18.750Z] Copying: 141/1024 [MB] (26 MBps) [2024-11-17T04:25:19.721Z] Copying: 154/1024 [MB] (12 MBps) [2024-11-17T04:25:20.765Z] Copying: 172/1024 [MB] (18 MBps) [2024-11-17T04:25:21.711Z] Copying: 189/1024 [MB] (16 MBps) [2024-11-17T04:25:22.656Z] Copying: 210/1024 [MB] (21 MBps) [2024-11-17T04:25:23.603Z] Copying: 221/1024 [MB] (11 MBps) [2024-11-17T04:25:24.561Z] Copying: 233/1024 [MB] (12 MBps) [2024-11-17T04:25:25.948Z] Copying: 245/1024 [MB] (11 MBps) [2024-11-17T04:25:26.891Z] Copying: 256/1024 [MB] (11 MBps) [2024-11-17T04:25:27.836Z] Copying: 267/1024 [MB] (11 MBps) [2024-11-17T04:25:28.782Z] Copying: 285/1024 [MB] (17 MBps) [2024-11-17T04:25:29.730Z] Copying: 300/1024 [MB] (15 MBps) [2024-11-17T04:25:30.676Z] Copying: 320/1024 [MB] (19 MBps) [2024-11-17T04:25:31.620Z] Copying: 338/1024 [MB] (18 MBps) [2024-11-17T04:25:32.567Z] Copying: 353/1024 [MB] (14 MBps) [2024-11-17T04:25:33.955Z] Copying: 364/1024 [MB] (10 MBps) [2024-11-17T04:25:34.900Z] Copying: 375/1024 [MB] (10 MBps) [2024-11-17T04:25:35.845Z] Copying: 393/1024 [MB] (17 MBps) [2024-11-17T04:25:36.790Z] Copying: 406/1024 [MB] (13 MBps) [2024-11-17T04:25:37.735Z] Copying: 420/1024 [MB] (13 MBps) [2024-11-17T04:25:38.681Z] Copying: 432/1024 [MB] (11 MBps) [2024-11-17T04:25:39.624Z] Copying: 446/1024 [MB] (13 MBps) [2024-11-17T04:25:40.566Z] Copying: 456/1024 [MB] (10 MBps) [2024-11-17T04:25:41.954Z] Copying: 470/1024 [MB] (13 MBps) [2024-11-17T04:25:42.897Z] Copying: 485/1024 [MB] (15 MBps) [2024-11-17T04:25:43.837Z] Copying: 498/1024 [MB] (12 MBps) [2024-11-17T04:25:44.779Z] Copying: 510/1024 [MB] (11 MBps) [2024-11-17T04:25:45.725Z] Copying: 529/1024 [MB] (19 MBps) [2024-11-17T04:25:46.670Z] Copying: 548/1024 [MB] (19 MBps) [2024-11-17T04:25:47.612Z] Copying: 565/1024 [MB] (16 MBps) [2024-11-17T04:25:48.610Z] Copying: 587/1024 [MB] (21 MBps) [2024-11-17T04:25:50.000Z] Copying: 607/1024 [MB] (20 MBps) [2024-11-17T04:25:50.573Z] Copying: 627/1024 [MB] (19 MBps) [2024-11-17T04:25:51.957Z] Copying: 644/1024 [MB] (16 MBps) [2024-11-17T04:25:52.903Z] Copying: 657/1024 [MB] (13 MBps) [2024-11-17T04:25:53.849Z] Copying: 682/1024 [MB] (24 MBps) [2024-11-17T04:25:54.789Z] Copying: 701/1024 [MB] (19 MBps) [2024-11-17T04:25:55.735Z] Copying: 719/1024 [MB] (18 MBps) [2024-11-17T04:25:56.679Z] Copying: 735/1024 [MB] (15 MBps) [2024-11-17T04:25:57.620Z] Copying: 752/1024 [MB] (17 MBps) [2024-11-17T04:25:59.003Z] Copying: 771/1024 [MB] (18 MBps) [2024-11-17T04:25:59.573Z] Copying: 781/1024 [MB] (10 MBps) [2024-11-17T04:26:00.955Z] Copying: 792/1024 [MB] (10 MBps) [2024-11-17T04:26:01.894Z] Copying: 802/1024 [MB] (10 MBps) [2024-11-17T04:26:02.834Z] Copying: 813/1024 [MB] (10 MBps) [2024-11-17T04:26:03.771Z] Copying: 824/1024 [MB] (10 MBps) [2024-11-17T04:26:04.710Z] Copying: 834/1024 [MB] (10 MBps) [2024-11-17T04:26:05.653Z] Copying: 846/1024 [MB] (11 MBps) 
[2024-11-17T04:26:06.598Z] Copying: 860/1024 [MB] (14 MBps) [2024-11-17T04:26:07.987Z] Copying: 878/1024 [MB] (17 MBps) [2024-11-17T04:26:08.929Z] Copying: 899/1024 [MB] (21 MBps) [2024-11-17T04:26:09.870Z] Copying: 922/1024 [MB] (23 MBps) [2024-11-17T04:26:10.809Z] Copying: 946/1024 [MB] (23 MBps) [2024-11-17T04:26:11.749Z] Copying: 966/1024 [MB] (20 MBps) [2024-11-17T04:26:12.693Z] Copying: 989/1024 [MB] (22 MBps) [2024-11-17T04:26:13.640Z] Copying: 1010/1024 [MB] (20 MBps) [2024-11-17T04:26:13.640Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-17T04:26:14.584Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-17 04:26:14.514727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.857 [2024-11-17 04:26:14.514829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:28.857 [2024-11-17 04:26:14.514851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:28.857 [2024-11-17 04:26:14.514878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.857 [2024-11-17 04:26:14.514914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.857 [2024-11-17 04:26:14.515953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.857 [2024-11-17 04:26:14.515989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:28.857 [2024-11-17 04:26:14.516003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:20:28.857 [2024-11-17 04:26:14.516014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.857 [2024-11-17 04:26:14.516292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.857 [2024-11-17 04:26:14.516305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:28.857 [2024-11-17 04:26:14.516315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:20:28.857 [2024-11-17 04:26:14.516324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.857 [2024-11-17 04:26:14.519848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.857 [2024-11-17 04:26:14.519881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:28.857 [2024-11-17 04:26:14.519893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:20:28.857 [2024-11-17 04:26:14.519901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.857 [2024-11-17 04:26:14.526736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.858 [2024-11-17 04:26:14.526782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:28.858 [2024-11-17 04:26:14.526795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.815 ms 00:20:28.858 [2024-11-17 04:26:14.526805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.858 [2024-11-17 04:26:14.530243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.858 [2024-11-17 04:26:14.530305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.858 [2024-11-17 04:26:14.530317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.348 ms 00:20:28.858 [2024-11-17 04:26:14.530326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.858 [2024-11-17 04:26:14.537159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:28.858 [2024-11-17 04:26:14.537286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:28.858 [2024-11-17 04:26:14.537312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.751 ms 00:20:28.858 [2024-11-17 04:26:14.537329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.858 [2024-11-17 04:26:14.537631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.858 [2024-11-17 04:26:14.537655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.858 [2024-11-17 04:26:14.537673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:20:28.858 [2024-11-17 04:26:14.537689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.858 [2024-11-17 04:26:14.541495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.858 [2024-11-17 04:26:14.541567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:28.858 [2024-11-17 04:26:14.541587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.741 ms 00:20:28.858 [2024-11-17 04:26:14.541602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.858 [2024-11-17 04:26:14.544811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.858 [2024-11-17 04:26:14.544859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:28.858 [2024-11-17 04:26:14.544870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.145 ms 00:20:28.858 [2024-11-17 04:26:14.544878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.858 [2024-11-17 04:26:14.547232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.858 [2024-11-17 04:26:14.547287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.858 [2024-11-17 04:26:14.547298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.307 ms 00:20:28.858 [2024-11-17 04:26:14.547306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.858 [2024-11-17 04:26:14.550521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.858 [2024-11-17 04:26:14.550579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.858 [2024-11-17 04:26:14.550590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:20:28.858 [2024-11-17 04:26:14.550598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.858 [2024-11-17 04:26:14.550640] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.858 [2024-11-17 04:26:14.550656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 
261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.550994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551096] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.858 [2024-11-17 04:26:14.551141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 
04:26:14.551291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.551965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.859 [2024-11-17 04:26:14.552224] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.859 [2024-11-17 04:26:14.552248] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 56f46c39-64c9-4241-8388-8d11dc80ea47 00:20:28.859 [2024-11-17 04:26:14.552278] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.859 [2024-11-17 04:26:14.552298] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:28.859 [2024-11-17 04:26:14.552317] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.859 [2024-11-17 04:26:14.552352] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
00:20:28.859 [2024-11-17 04:26:14.552389] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.859 [2024-11-17 04:26:14.552423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.859 [2024-11-17 04:26:14.552517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.859 [2024-11-17 04:26:14.552541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.859 [2024-11-17 04:26:14.552550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:28.859 [2024-11-17 04:26:14.552573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.859 [2024-11-17 04:26:14.552590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.859 [2024-11-17 04:26:14.552601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.934 ms 00:20:28.859 [2024-11-17 04:26:14.552610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.859 [2024-11-17 04:26:14.556036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.859 [2024-11-17 04:26:14.556215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:28.859 [2024-11-17 04:26:14.556234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.401 ms 00:20:28.859 [2024-11-17 04:26:14.556243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.859 [2024-11-17 04:26:14.556432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.859 [2024-11-17 04:26:14.556450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:28.859 [2024-11-17 04:26:14.556462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:20:28.859 [2024-11-17 04:26:14.556478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.859 [2024-11-17 04:26:14.565729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.859 [2024-11-17 04:26:14.565783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.859 [2024-11-17 04:26:14.565795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.859 [2024-11-17 04:26:14.565804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.859 [2024-11-17 04:26:14.565874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.859 [2024-11-17 04:26:14.565882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.859 [2024-11-17 04:26:14.565892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.859 [2024-11-17 04:26:14.565900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.859 [2024-11-17 04:26:14.565991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.859 [2024-11-17 04:26:14.566003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.859 [2024-11-17 04:26:14.566011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.859 [2024-11-17 04:26:14.566020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.859 [2024-11-17 04:26:14.566036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.859 [2024-11-17 04:26:14.566047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.859 [2024-11-17 04:26:14.566056] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.859 [2024-11-17 04:26:14.566064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.121 [2024-11-17 04:26:14.583000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.121 [2024-11-17 04:26:14.583062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.121 [2024-11-17 04:26:14.583075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.121 [2024-11-17 04:26:14.583083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.121 [2024-11-17 04:26:14.593899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.121 [2024-11-17 04:26:14.594154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.121 [2024-11-17 04:26:14.594173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.121 [2024-11-17 04:26:14.594182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.121 [2024-11-17 04:26:14.594240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.121 [2024-11-17 04:26:14.594250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.121 [2024-11-17 04:26:14.594259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.121 [2024-11-17 04:26:14.594268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.121 [2024-11-17 04:26:14.594305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.121 [2024-11-17 04:26:14.594315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.121 [2024-11-17 04:26:14.594330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.121 [2024-11-17 04:26:14.594338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.121 [2024-11-17 04:26:14.594453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.121 [2024-11-17 04:26:14.594469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.121 [2024-11-17 04:26:14.594487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.121 [2024-11-17 04:26:14.594495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.121 [2024-11-17 04:26:14.594536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.121 [2024-11-17 04:26:14.594546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:29.121 [2024-11-17 04:26:14.594559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.121 [2024-11-17 04:26:14.594570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.121 [2024-11-17 04:26:14.594612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.121 [2024-11-17 04:26:14.594622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.121 [2024-11-17 04:26:14.594630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.121 [2024-11-17 04:26:14.594639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.121 [2024-11-17 04:26:14.594693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.121 [2024-11-17 04:26:14.594704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:20:29.121 [2024-11-17 04:26:14.594717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.121 [2024-11-17 04:26:14.594725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.121 [2024-11-17 04:26:14.594859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 80.105 ms, result 0 00:20:29.121 00:20:29.121 00:20:29.121 04:26:14 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:31.669 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:31.669 04:26:17 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:31.669 [2024-11-17 04:26:17.123037] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:20:31.669 [2024-11-17 04:26:17.123196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87405 ] 00:20:31.669 [2024-11-17 04:26:17.284362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.669 [2024-11-17 04:26:17.313689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.939 [2024-11-17 04:26:17.429634] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.939 [2024-11-17 04:26:17.429712] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.939 [2024-11-17 04:26:17.591656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.591886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:31.940 [2024-11-17 04:26:17.591910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:31.940 [2024-11-17 04:26:17.591919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.591999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.592014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.940 [2024-11-17 04:26:17.592024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:31.940 [2024-11-17 04:26:17.592031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.592058] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:31.940 [2024-11-17 04:26:17.592322] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:31.940 [2024-11-17 04:26:17.592401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.592413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.940 [2024-11-17 04:26:17.592424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:20:31.940 [2024-11-17 04:26:17.592437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.594131] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:31.940 
[2024-11-17 04:26:17.597898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.597956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:31.940 [2024-11-17 04:26:17.597968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.769 ms 00:20:31.940 [2024-11-17 04:26:17.597980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.598057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.598070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:31.940 [2024-11-17 04:26:17.598079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:31.940 [2024-11-17 04:26:17.598087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.606242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.606447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.940 [2024-11-17 04:26:17.606474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.109 ms 00:20:31.940 [2024-11-17 04:26:17.606483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.606588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.606601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.940 [2024-11-17 04:26:17.606611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:31.940 [2024-11-17 04:26:17.606622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.606682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.606693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:31.940 [2024-11-17 04:26:17.606708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:31.940 [2024-11-17 04:26:17.606716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.606740] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:31.940 [2024-11-17 04:26:17.608750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.608795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.940 [2024-11-17 04:26:17.608806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.015 ms 00:20:31.940 [2024-11-17 04:26:17.608814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.608846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.608855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:31.940 [2024-11-17 04:26:17.608863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:31.940 [2024-11-17 04:26:17.608871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.608901] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:31.940 [2024-11-17 04:26:17.608922] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 
0x150 bytes 00:20:31.940 [2024-11-17 04:26:17.608965] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:31.940 [2024-11-17 04:26:17.608982] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:31.940 [2024-11-17 04:26:17.609088] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:31.940 [2024-11-17 04:26:17.609099] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:31.940 [2024-11-17 04:26:17.609111] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:31.940 [2024-11-17 04:26:17.609124] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:31.940 [2024-11-17 04:26:17.609133] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:31.940 [2024-11-17 04:26:17.609142] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:31.940 [2024-11-17 04:26:17.609150] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:31.940 [2024-11-17 04:26:17.609164] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:31.940 [2024-11-17 04:26:17.609175] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:31.940 [2024-11-17 04:26:17.609185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.609196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:31.940 [2024-11-17 04:26:17.609203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:31.940 [2024-11-17 04:26:17.609212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.609299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.940 [2024-11-17 04:26:17.609310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:31.940 [2024-11-17 04:26:17.609317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:31.940 [2024-11-17 04:26:17.609325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.940 [2024-11-17 04:26:17.609442] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:31.940 [2024-11-17 04:26:17.609455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:31.940 [2024-11-17 04:26:17.609465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.940 [2024-11-17 04:26:17.609478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.940 [2024-11-17 04:26:17.609487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:31.940 [2024-11-17 04:26:17.609501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:31.940 [2024-11-17 04:26:17.609510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:31.940 [2024-11-17 04:26:17.609543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:31.940 [2024-11-17 04:26:17.609552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:31.940 [2024-11-17 04:26:17.609561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 
MiB 00:20:31.940 [2024-11-17 04:26:17.609572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:31.940 [2024-11-17 04:26:17.609580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:31.940 [2024-11-17 04:26:17.609587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.940 [2024-11-17 04:26:17.609595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:31.940 [2024-11-17 04:26:17.609604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:31.940 [2024-11-17 04:26:17.609613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.940 [2024-11-17 04:26:17.609621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:31.940 [2024-11-17 04:26:17.609629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:31.940 [2024-11-17 04:26:17.609638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.940 [2024-11-17 04:26:17.609647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:31.940 [2024-11-17 04:26:17.609656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:31.940 [2024-11-17 04:26:17.609664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.940 [2024-11-17 04:26:17.609672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:31.940 [2024-11-17 04:26:17.609680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:31.940 [2024-11-17 04:26:17.609688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.940 [2024-11-17 04:26:17.609697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:31.940 [2024-11-17 04:26:17.609712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:31.940 [2024-11-17 04:26:17.609722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.940 [2024-11-17 04:26:17.609730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:31.940 [2024-11-17 04:26:17.609750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:31.941 [2024-11-17 04:26:17.609758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.941 [2024-11-17 04:26:17.609767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:31.941 [2024-11-17 04:26:17.609775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:31.941 [2024-11-17 04:26:17.609782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.941 [2024-11-17 04:26:17.609790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:31.941 [2024-11-17 04:26:17.609799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:31.941 [2024-11-17 04:26:17.609806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.941 [2024-11-17 04:26:17.609814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:31.941 [2024-11-17 04:26:17.609821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:31.941 [2024-11-17 04:26:17.609827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.941 [2024-11-17 04:26:17.609833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:31.941 [2024-11-17 04:26:17.609841] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:31.941 [2024-11-17 04:26:17.609851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.941 [2024-11-17 04:26:17.609858] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:31.941 [2024-11-17 04:26:17.609865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:31.941 [2024-11-17 04:26:17.609875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.941 [2024-11-17 04:26:17.609883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.941 [2024-11-17 04:26:17.609890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:31.941 [2024-11-17 04:26:17.609897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:31.941 [2024-11-17 04:26:17.609903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:31.941 [2024-11-17 04:26:17.609912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:31.941 [2024-11-17 04:26:17.609918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:31.941 [2024-11-17 04:26:17.609925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:31.941 [2024-11-17 04:26:17.609933] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:31.941 [2024-11-17 04:26:17.609943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.941 [2024-11-17 04:26:17.609952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:31.941 [2024-11-17 04:26:17.609959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:31.941 [2024-11-17 04:26:17.609966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:31.941 [2024-11-17 04:26:17.609976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:31.941 [2024-11-17 04:26:17.609984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:31.941 [2024-11-17 04:26:17.609991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:31.941 [2024-11-17 04:26:17.609997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:31.941 [2024-11-17 04:26:17.610005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:31.941 [2024-11-17 04:26:17.610012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:31.941 [2024-11-17 04:26:17.610019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:31.941 [2024-11-17 04:26:17.610027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:31.941 [2024-11-17 
04:26:17.610033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:31.941 [2024-11-17 04:26:17.610042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:31.941 [2024-11-17 04:26:17.610049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:31.941 [2024-11-17 04:26:17.610056] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:31.941 [2024-11-17 04:26:17.610067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.941 [2024-11-17 04:26:17.610074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:31.941 [2024-11-17 04:26:17.610082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:31.941 [2024-11-17 04:26:17.610090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:31.941 [2024-11-17 04:26:17.610100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:31.941 [2024-11-17 04:26:17.610108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.941 [2024-11-17 04:26:17.610115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:31.941 [2024-11-17 04:26:17.610126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:20:31.941 [2024-11-17 04:26:17.610134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.941 [2024-11-17 04:26:17.624415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.941 [2024-11-17 04:26:17.624459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.941 [2024-11-17 04:26:17.624470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.228 ms 00:20:31.941 [2024-11-17 04:26:17.624478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.941 [2024-11-17 04:26:17.624568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.941 [2024-11-17 04:26:17.624577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.941 [2024-11-17 04:26:17.624587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:31.941 [2024-11-17 04:26:17.624602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.941 [2024-11-17 04:26:17.647975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.941 [2024-11-17 04:26:17.648236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.941 [2024-11-17 04:26:17.648262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.312 ms 00:20:31.941 [2024-11-17 04:26:17.648274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.941 [2024-11-17 04:26:17.648350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.941 [2024-11-17 04:26:17.648364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize valid map 00:20:31.941 [2024-11-17 04:26:17.648423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:31.941 [2024-11-17 04:26:17.648438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.941 [2024-11-17 04:26:17.649071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.941 [2024-11-17 04:26:17.649135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.941 [2024-11-17 04:26:17.649150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:20:31.941 [2024-11-17 04:26:17.649166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.941 [2024-11-17 04:26:17.649351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.941 [2024-11-17 04:26:17.649363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.941 [2024-11-17 04:26:17.649393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:20:31.941 [2024-11-17 04:26:17.649404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.941 [2024-11-17 04:26:17.658625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.941 [2024-11-17 04:26:17.658845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.941 [2024-11-17 04:26:17.658877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.193 ms 00:20:31.941 [2024-11-17 04:26:17.658886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.663021] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:32.300 [2024-11-17 04:26:17.663083] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:32.300 [2024-11-17 04:26:17.663097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.663106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:32.300 [2024-11-17 04:26:17.663115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.085 ms 00:20:32.300 [2024-11-17 04:26:17.663123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.679745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.679812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:32.300 [2024-11-17 04:26:17.679827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.552 ms 00:20:32.300 [2024-11-17 04:26:17.679836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.683154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.683212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:32.300 [2024-11-17 04:26:17.683224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.245 ms 00:20:32.300 [2024-11-17 04:26:17.683233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.686280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.686498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:32.300 [2024-11-17 04:26:17.686520] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.994 ms 00:20:32.300 [2024-11-17 04:26:17.686528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.686963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.687006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:32.300 [2024-11-17 04:26:17.687020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:32.300 [2024-11-17 04:26:17.687036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.711210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.711275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:32.300 [2024-11-17 04:26:17.711290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.148 ms 00:20:32.300 [2024-11-17 04:26:17.711298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.720068] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:32.300 [2024-11-17 04:26:17.723413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.723466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:32.300 [2024-11-17 04:26:17.723477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.059 ms 00:20:32.300 [2024-11-17 04:26:17.723488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.723566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.723577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:32.300 [2024-11-17 04:26:17.723587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:32.300 [2024-11-17 04:26:17.723599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.723673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.723684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:32.300 [2024-11-17 04:26:17.723695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:32.300 [2024-11-17 04:26:17.723704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.723729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.723737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:32.300 [2024-11-17 04:26:17.723746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:32.300 [2024-11-17 04:26:17.723754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.723792] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:32.300 [2024-11-17 04:26:17.723803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.723812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:32.300 [2024-11-17 04:26:17.723824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:32.300 [2024-11-17 04:26:17.723839] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.729329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.729400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:32.300 [2024-11-17 04:26:17.729412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.472 ms 00:20:32.300 [2024-11-17 04:26:17.729421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.729515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.300 [2024-11-17 04:26:17.729525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:32.300 [2024-11-17 04:26:17.729541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:32.300 [2024-11-17 04:26:17.729550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.300 [2024-11-17 04:26:17.730655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.527 ms, result 0 00:20:33.244  [2024-11-17T04:26:19.912Z] Copying: 10048/1048576 [kB] (10048 kBps) [2024-11-17T04:26:20.854Z] Copying: 19/1024 [MB] (10 MBps) [2024-11-17T04:26:21.796Z] Copying: 33/1024 [MB] (13 MBps) [2024-11-17T04:26:23.183Z] Copying: 54/1024 [MB] (21 MBps) [2024-11-17T04:26:23.753Z] Copying: 73/1024 [MB] (19 MBps) [2024-11-17T04:26:25.139Z] Copying: 94/1024 [MB] (20 MBps) [2024-11-17T04:26:26.083Z] Copying: 115/1024 [MB] (20 MBps) [2024-11-17T04:26:27.026Z] Copying: 129/1024 [MB] (14 MBps) [2024-11-17T04:26:27.969Z] Copying: 145/1024 [MB] (16 MBps) [2024-11-17T04:26:28.913Z] Copying: 161/1024 [MB] (15 MBps) [2024-11-17T04:26:29.859Z] Copying: 172/1024 [MB] (10 MBps) [2024-11-17T04:26:30.806Z] Copying: 182/1024 [MB] (10 MBps) [2024-11-17T04:26:31.751Z] Copying: 193/1024 [MB] (10 MBps) [2024-11-17T04:26:33.141Z] Copying: 204/1024 [MB] (11 MBps) [2024-11-17T04:26:34.087Z] Copying: 215/1024 [MB] (10 MBps) [2024-11-17T04:26:35.032Z] Copying: 225/1024 [MB] (10 MBps) [2024-11-17T04:26:35.979Z] Copying: 236/1024 [MB] (10 MBps) [2024-11-17T04:26:36.924Z] Copying: 246/1024 [MB] (10 MBps) [2024-11-17T04:26:37.857Z] Copying: 260/1024 [MB] (14 MBps) [2024-11-17T04:26:38.793Z] Copying: 316/1024 [MB] (55 MBps) [2024-11-17T04:26:40.178Z] Copying: 363/1024 [MB] (46 MBps) [2024-11-17T04:26:40.749Z] Copying: 379/1024 [MB] (15 MBps) [2024-11-17T04:26:42.134Z] Copying: 389/1024 [MB] (10 MBps) [2024-11-17T04:26:43.072Z] Copying: 400/1024 [MB] (10 MBps) [2024-11-17T04:26:44.012Z] Copying: 420/1024 [MB] (20 MBps) [2024-11-17T04:26:44.958Z] Copying: 451/1024 [MB] (31 MBps) [2024-11-17T04:26:45.901Z] Copying: 467/1024 [MB] (15 MBps) [2024-11-17T04:26:46.879Z] Copying: 486/1024 [MB] (19 MBps) [2024-11-17T04:26:47.820Z] Copying: 496/1024 [MB] (10 MBps) [2024-11-17T04:26:48.764Z] Copying: 520/1024 [MB] (23 MBps) [2024-11-17T04:26:50.163Z] Copying: 541/1024 [MB] (21 MBps) [2024-11-17T04:26:51.108Z] Copying: 558/1024 [MB] (16 MBps) [2024-11-17T04:26:52.053Z] Copying: 574/1024 [MB] (15 MBps) [2024-11-17T04:26:52.997Z] Copying: 593/1024 [MB] (18 MBps) [2024-11-17T04:26:53.937Z] Copying: 605/1024 [MB] (12 MBps) [2024-11-17T04:26:54.879Z] Copying: 622/1024 [MB] (16 MBps) [2024-11-17T04:26:55.821Z] Copying: 639/1024 [MB] (17 MBps) [2024-11-17T04:26:56.766Z] Copying: 649/1024 [MB] (10 MBps) [2024-11-17T04:26:58.155Z] Copying: 667/1024 [MB] (18 MBps) [2024-11-17T04:26:59.101Z] Copying: 684/1024 [MB] 
(16 MBps) [2024-11-17T04:27:00.062Z] Copying: 699/1024 [MB] (15 MBps) [2024-11-17T04:27:01.006Z] Copying: 724/1024 [MB] (25 MBps) [2024-11-17T04:27:01.950Z] Copying: 736/1024 [MB] (11 MBps) [2024-11-17T04:27:02.894Z] Copying: 747/1024 [MB] (10 MBps) [2024-11-17T04:27:03.866Z] Copying: 757/1024 [MB] (10 MBps) [2024-11-17T04:27:04.812Z] Copying: 768/1024 [MB] (10 MBps) [2024-11-17T04:27:05.758Z] Copying: 778/1024 [MB] (10 MBps) [2024-11-17T04:27:07.148Z] Copying: 789/1024 [MB] (10 MBps) [2024-11-17T04:27:08.092Z] Copying: 799/1024 [MB] (10 MBps) [2024-11-17T04:27:09.036Z] Copying: 809/1024 [MB] (10 MBps) [2024-11-17T04:27:09.980Z] Copying: 839272/1048576 [kB] (10160 kBps) [2024-11-17T04:27:10.923Z] Copying: 835/1024 [MB] (16 MBps) [2024-11-17T04:27:11.868Z] Copying: 851/1024 [MB] (15 MBps) [2024-11-17T04:27:12.812Z] Copying: 869/1024 [MB] (17 MBps) [2024-11-17T04:27:13.757Z] Copying: 881/1024 [MB] (12 MBps) [2024-11-17T04:27:14.777Z] Copying: 892/1024 [MB] (11 MBps) [2024-11-17T04:27:15.746Z] Copying: 903/1024 [MB] (10 MBps) [2024-11-17T04:27:17.137Z] Copying: 913/1024 [MB] (10 MBps) [2024-11-17T04:27:18.080Z] Copying: 923/1024 [MB] (10 MBps) [2024-11-17T04:27:19.027Z] Copying: 934/1024 [MB] (11 MBps) [2024-11-17T04:27:19.969Z] Copying: 945/1024 [MB] (10 MBps) [2024-11-17T04:27:20.914Z] Copying: 956/1024 [MB] (10 MBps) [2024-11-17T04:27:21.858Z] Copying: 966/1024 [MB] (10 MBps) [2024-11-17T04:27:22.803Z] Copying: 977/1024 [MB] (10 MBps) [2024-11-17T04:27:23.748Z] Copying: 987/1024 [MB] (10 MBps) [2024-11-17T04:27:25.138Z] Copying: 997/1024 [MB] (10 MBps) [2024-11-17T04:27:26.083Z] Copying: 1008/1024 [MB] (10 MBps) [2024-11-17T04:27:27.022Z] Copying: 1042488/1048576 [kB] (10216 kBps) [2024-11-17T04:27:27.281Z] Copying: 1048172/1048576 [kB] (5684 kBps) [2024-11-17T04:27:27.281Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-17 04:27:27.086028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.554 [2024-11-17 04:27:27.086078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:41.554 [2024-11-17 04:27:27.086090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:41.554 [2024-11-17 04:27:27.086097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.554 [2024-11-17 04:27:27.087784] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:41.554 [2024-11-17 04:27:27.090707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.554 [2024-11-17 04:27:27.090805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:41.554 [2024-11-17 04:27:27.090853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.822 ms 00:21:41.554 [2024-11-17 04:27:27.090876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.554 [2024-11-17 04:27:27.100125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.554 [2024-11-17 04:27:27.100220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:41.554 [2024-11-17 04:27:27.100269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.018 ms 00:21:41.554 [2024-11-17 04:27:27.100288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.554 [2024-11-17 04:27:27.114971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.554 [2024-11-17 04:27:27.115066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:21:41.554 [2024-11-17 04:27:27.115115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.659 ms 00:21:41.554 [2024-11-17 04:27:27.115133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.554 [2024-11-17 04:27:27.119950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.554 [2024-11-17 04:27:27.120037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:41.554 [2024-11-17 04:27:27.120086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.780 ms 00:21:41.555 [2024-11-17 04:27:27.120110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.555 [2024-11-17 04:27:27.121032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.555 [2024-11-17 04:27:27.121122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:41.555 [2024-11-17 04:27:27.121161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:21:41.555 [2024-11-17 04:27:27.121178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.555 [2024-11-17 04:27:27.124173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.555 [2024-11-17 04:27:27.124261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:41.555 [2024-11-17 04:27:27.124301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.965 ms 00:21:41.555 [2024-11-17 04:27:27.124318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.555 [2024-11-17 04:27:27.170581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.555 [2024-11-17 04:27:27.170678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:41.555 [2024-11-17 04:27:27.170721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.224 ms 00:21:41.555 [2024-11-17 04:27:27.170739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.555 [2024-11-17 04:27:27.172271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.555 [2024-11-17 04:27:27.172357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:41.555 [2024-11-17 04:27:27.172418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:21:41.555 [2024-11-17 04:27:27.172437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.555 [2024-11-17 04:27:27.173562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.555 [2024-11-17 04:27:27.173644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:41.555 [2024-11-17 04:27:27.173682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms 00:21:41.555 [2024-11-17 04:27:27.173700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.555 [2024-11-17 04:27:27.174567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.555 [2024-11-17 04:27:27.174652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:41.555 [2024-11-17 04:27:27.174691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:21:41.555 [2024-11-17 04:27:27.174708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.555 [2024-11-17 04:27:27.175591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.555 [2024-11-17 04:27:27.175675] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:41.555 [2024-11-17 04:27:27.175686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:21:41.555 [2024-11-17 04:27:27.175692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.555 [2024-11-17 04:27:27.175712] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:41.555 [2024-11-17 04:27:27.175724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 113920 / 261120 wr_cnt: 1 state: open 00:21:41.555 [2024-11-17 04:27:27.175732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 
state: free 00:21:41.555 [2024-11-17 04:27:27.175856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.175994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 
0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:41.555 [2024-11-17 04:27:27.176174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176297] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:41.556 [2024-11-17 04:27:27.176328] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:41.556 [2024-11-17 04:27:27.176334] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 56f46c39-64c9-4241-8388-8d11dc80ea47 00:21:41.556 [2024-11-17 04:27:27.176341] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 113920 00:21:41.556 [2024-11-17 04:27:27.176347] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 114880 00:21:41.556 [2024-11-17 04:27:27.176361] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 113920 00:21:41.556 [2024-11-17 04:27:27.176368] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:21:41.556 [2024-11-17 04:27:27.176388] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:41.556 [2024-11-17 04:27:27.176395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:41.556 [2024-11-17 04:27:27.176406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:41.556 [2024-11-17 04:27:27.176411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:41.556 [2024-11-17 04:27:27.176416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:41.556 [2024-11-17 04:27:27.176426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.556 [2024-11-17 04:27:27.176434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:41.556 [2024-11-17 04:27:27.176441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:21:41.556 [2024-11-17 04:27:27.176447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.177719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.556 [2024-11-17 04:27:27.177733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:41.556 [2024-11-17 04:27:27.177740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:21:41.556 [2024-11-17 04:27:27.177750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.177819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.556 [2024-11-17 04:27:27.177830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:41.556 [2024-11-17 04:27:27.177837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:41.556 [2024-11-17 04:27:27.177843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.182030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.182051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.556 [2024-11-17 04:27:27.182058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.182064] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.182106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.182112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.556 [2024-11-17 04:27:27.182118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.182124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.182153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.182160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.556 [2024-11-17 04:27:27.182166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.182171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.182182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.182187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.556 [2024-11-17 04:27:27.182193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.182199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.189733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.189768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.556 [2024-11-17 04:27:27.189775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.189781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.195944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.195977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.556 [2024-11-17 04:27:27.195985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.195991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.196031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.196041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.556 [2024-11-17 04:27:27.196049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.196055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.196076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.196082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.556 [2024-11-17 04:27:27.196088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.196094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.196146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.196156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.556 [2024-11-17 04:27:27.196164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.196171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.196194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.196202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:41.556 [2024-11-17 04:27:27.196208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.196214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.196240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.196247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.556 [2024-11-17 04:27:27.196255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.196261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.196292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.556 [2024-11-17 04:27:27.196300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.556 [2024-11-17 04:27:27.196306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.556 [2024-11-17 04:27:27.196311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.556 [2024-11-17 04:27:27.196425] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 111.070 ms, result 0 00:21:42.498 00:21:42.498 00:21:42.498 04:27:27 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:42.498 [2024-11-17 04:27:27.997686] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
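(Aside, not part of the captured console output: the counters in the FTL shutdown dump above and the spdk_dd arguments can be cross-checked with simple arithmetic. The WAF reported by ftl_dev_dump_stats is total media writes divided by user writes, and the --count value lines up with the 1048576 kB total shown in the copy progress if one assumes a 4 KiB I/O unit for --skip/--count — that block size is an inference from this log, not something the log states. A minimal sketch in Python, using only numbers visible above:)

```python
# Reproduce the WAF figure from the counters in the ftl_dev_dump_stats dump above.
total_writes = 114880   # "total writes" reported by the FTL shutdown dump
user_writes = 113920    # "user writes" reported by the FTL shutdown dump
print(f"WAF ~= {total_writes / user_writes:.4f}")        # -> 1.0084, matching the log

# Relate the spdk_dd arguments to the copy-progress totals, assuming a 4 KiB
# block size for --skip/--count (an assumption inferred from this log).
block_kib = 4
count = 262144          # --count from the spdk_dd command line
skip = 131072           # --skip  from the spdk_dd command line
print(f"copy size   ~= {count * block_kib} kB")          # 1048576 kB (1 GiB)
print(f"skip offset ~= {skip * block_kib // 1024} MiB")  # 512 MiB into the source
```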
00:21:42.498 [2024-11-17 04:27:27.997840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88138 ] 00:21:42.498 [2024-11-17 04:27:28.159986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.498 [2024-11-17 04:27:28.188977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.759 [2024-11-17 04:27:28.301795] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:42.759 [2024-11-17 04:27:28.301882] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:42.759 [2024-11-17 04:27:28.463288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.759 [2024-11-17 04:27:28.463355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:42.759 [2024-11-17 04:27:28.463371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:42.759 [2024-11-17 04:27:28.463409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.759 [2024-11-17 04:27:28.463473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.759 [2024-11-17 04:27:28.463487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.759 [2024-11-17 04:27:28.463496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:42.759 [2024-11-17 04:27:28.463505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.759 [2024-11-17 04:27:28.463532] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:42.759 [2024-11-17 04:27:28.463813] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:42.759 [2024-11-17 04:27:28.463831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.759 [2024-11-17 04:27:28.463839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.759 [2024-11-17 04:27:28.463852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:21:42.759 [2024-11-17 04:27:28.463863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.759 [2024-11-17 04:27:28.465608] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:42.759 [2024-11-17 04:27:28.469689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.759 [2024-11-17 04:27:28.469738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:42.759 [2024-11-17 04:27:28.469758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.082 ms 00:21:42.759 [2024-11-17 04:27:28.469770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.759 [2024-11-17 04:27:28.469848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.759 [2024-11-17 04:27:28.469863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:42.759 [2024-11-17 04:27:28.469872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:42.759 [2024-11-17 04:27:28.469880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.759 [2024-11-17 04:27:28.477942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:42.759 [2024-11-17 04:27:28.477993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.759 [2024-11-17 04:27:28.478008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.019 ms 00:21:42.759 [2024-11-17 04:27:28.478017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.759 [2024-11-17 04:27:28.478120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.759 [2024-11-17 04:27:28.478131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.759 [2024-11-17 04:27:28.478144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:42.759 [2024-11-17 04:27:28.478152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.759 [2024-11-17 04:27:28.478211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.759 [2024-11-17 04:27:28.478222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:42.759 [2024-11-17 04:27:28.478231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:42.759 [2024-11-17 04:27:28.478238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.759 [2024-11-17 04:27:28.478264] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:42.759 [2024-11-17 04:27:28.480416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.759 [2024-11-17 04:27:28.480460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.759 [2024-11-17 04:27:28.480471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:21:42.759 [2024-11-17 04:27:28.480479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.759 [2024-11-17 04:27:28.480514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.759 [2024-11-17 04:27:28.480523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:42.759 [2024-11-17 04:27:28.480531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:42.759 [2024-11-17 04:27:28.480545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.759 [2024-11-17 04:27:28.480570] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:42.759 [2024-11-17 04:27:28.480592] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:42.759 [2024-11-17 04:27:28.480632] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:42.759 [2024-11-17 04:27:28.480654] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:42.759 [2024-11-17 04:27:28.480761] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:42.759 [2024-11-17 04:27:28.480773] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:42.759 [2024-11-17 04:27:28.480784] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:42.759 [2024-11-17 04:27:28.480798] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:42.759 [2024-11-17 04:27:28.480807] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:42.759 [2024-11-17 04:27:28.480820] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:42.759 [2024-11-17 04:27:28.480829] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:42.759 [2024-11-17 04:27:28.480837] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:42.759 [2024-11-17 04:27:28.480845] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:42.759 [2024-11-17 04:27:28.480854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.760 [2024-11-17 04:27:28.480866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:42.760 [2024-11-17 04:27:28.480874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:21:42.760 [2024-11-17 04:27:28.480882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.760 [2024-11-17 04:27:28.480964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.760 [2024-11-17 04:27:28.480975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:42.760 [2024-11-17 04:27:28.480983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:42.760 [2024-11-17 04:27:28.480991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.760 [2024-11-17 04:27:28.481088] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:42.760 [2024-11-17 04:27:28.481099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:42.760 [2024-11-17 04:27:28.481109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:42.760 [2024-11-17 04:27:28.481117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:42.760 [2024-11-17 04:27:28.481143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:42.760 [2024-11-17 04:27:28.481163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:42.760 [2024-11-17 04:27:28.481171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:42.760 [2024-11-17 04:27:28.481187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:42.760 [2024-11-17 04:27:28.481196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:42.760 [2024-11-17 04:27:28.481204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:42.760 [2024-11-17 04:27:28.481212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:42.760 [2024-11-17 04:27:28.481220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:42.760 [2024-11-17 04:27:28.481229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:42.760 [2024-11-17 04:27:28.481245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:42.760 [2024-11-17 04:27:28.481253] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:42.760 [2024-11-17 04:27:28.481271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.760 [2024-11-17 04:27:28.481287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:42.760 [2024-11-17 04:27:28.481295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.760 [2024-11-17 04:27:28.481312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:42.760 [2024-11-17 04:27:28.481321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.760 [2024-11-17 04:27:28.481337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:42.760 [2024-11-17 04:27:28.481344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.760 [2024-11-17 04:27:28.481360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:42.760 [2024-11-17 04:27:28.481367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:42.760 [2024-11-17 04:27:28.481675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:42.760 [2024-11-17 04:27:28.481694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:42.760 [2024-11-17 04:27:28.481731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:42.760 [2024-11-17 04:27:28.481752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:42.760 [2024-11-17 04:27:28.481771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:42.760 [2024-11-17 04:27:28.481789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:42.760 [2024-11-17 04:27:28.481825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:42.760 [2024-11-17 04:27:28.481842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.760 [2024-11-17 04:27:28.481860] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:42.760 [2024-11-17 04:27:28.481886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:42.760 [2024-11-17 04:27:28.481977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:42.760 [2024-11-17 04:27:28.482009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.760 [2024-11-17 04:27:28.482029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:42.760 [2024-11-17 04:27:28.482049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:42.760 [2024-11-17 04:27:28.482067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:42.760 
[2024-11-17 04:27:28.482086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:42.760 [2024-11-17 04:27:28.482104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:42.760 [2024-11-17 04:27:28.482124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:42.760 [2024-11-17 04:27:28.482146] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:42.760 [2024-11-17 04:27:28.482177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:42.760 [2024-11-17 04:27:28.482206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:42.760 [2024-11-17 04:27:28.482235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:42.760 [2024-11-17 04:27:28.482308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:42.760 [2024-11-17 04:27:28.482338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:42.760 [2024-11-17 04:27:28.482460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:42.760 [2024-11-17 04:27:28.482492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:42.760 [2024-11-17 04:27:28.482550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:42.760 [2024-11-17 04:27:28.482583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:42.760 [2024-11-17 04:27:28.482612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:42.760 [2024-11-17 04:27:28.482647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:42.760 [2024-11-17 04:27:28.482676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:42.760 [2024-11-17 04:27:28.482704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:42.760 [2024-11-17 04:27:28.482732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:42.760 [2024-11-17 04:27:28.482767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:42.760 [2024-11-17 04:27:28.482834] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:42.760 [2024-11-17 04:27:28.482868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:42.760 [2024-11-17 04:27:28.482905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:42.760 [2024-11-17 04:27:28.482914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:42.760 [2024-11-17 04:27:28.482922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:42.760 [2024-11-17 04:27:28.482930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:42.760 [2024-11-17 04:27:28.482941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.760 [2024-11-17 04:27:28.482954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:42.760 [2024-11-17 04:27:28.482964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.921 ms 00:21:42.760 [2024-11-17 04:27:28.482972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.021 [2024-11-17 04:27:28.497131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.021 [2024-11-17 04:27:28.497185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.021 [2024-11-17 04:27:28.497198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.078 ms 00:21:43.021 [2024-11-17 04:27:28.497206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.021 [2024-11-17 04:27:28.497297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.021 [2024-11-17 04:27:28.497307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:43.021 [2024-11-17 04:27:28.497316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:43.021 [2024-11-17 04:27:28.497335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.021 [2024-11-17 04:27:28.520814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.021 [2024-11-17 04:27:28.520885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.021 [2024-11-17 04:27:28.520907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.394 ms 00:21:43.021 [2024-11-17 04:27:28.520917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.021 [2024-11-17 04:27:28.520972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.021 [2024-11-17 04:27:28.520985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.021 [2024-11-17 04:27:28.520997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:43.021 [2024-11-17 04:27:28.521007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.521661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.521709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.022 [2024-11-17 04:27:28.521723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:21:43.022 [2024-11-17 04:27:28.521734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.521924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.521936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.022 [2024-11-17 04:27:28.521946] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:21:43.022 [2024-11-17 04:27:28.521967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.530039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.530093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.022 [2024-11-17 04:27:28.530111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.046 ms 00:21:43.022 [2024-11-17 04:27:28.530119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.534061] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:43.022 [2024-11-17 04:27:28.534250] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:43.022 [2024-11-17 04:27:28.534269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.534278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:43.022 [2024-11-17 04:27:28.534287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.059 ms 00:21:43.022 [2024-11-17 04:27:28.534295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.550209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.550270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:43.022 [2024-11-17 04:27:28.550283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.870 ms 00:21:43.022 [2024-11-17 04:27:28.550291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.553185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.553354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:43.022 [2024-11-17 04:27:28.553391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.838 ms 00:21:43.022 [2024-11-17 04:27:28.553406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.556274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.556322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:43.022 [2024-11-17 04:27:28.556333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.829 ms 00:21:43.022 [2024-11-17 04:27:28.556341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.556835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.556964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:43.022 [2024-11-17 04:27:28.556982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:21:43.022 [2024-11-17 04:27:28.556999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.580079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.580307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:43.022 [2024-11-17 04:27:28.580340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.047 ms 00:21:43.022 [2024-11-17 04:27:28.580351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.588649] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:43.022 [2024-11-17 04:27:28.591837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.591984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:43.022 [2024-11-17 04:27:28.592040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.404 ms 00:21:43.022 [2024-11-17 04:27:28.592064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.592164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.592192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:43.022 [2024-11-17 04:27:28.592213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:43.022 [2024-11-17 04:27:28.592239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.594107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.594256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:43.022 [2024-11-17 04:27:28.594320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.816 ms 00:21:43.022 [2024-11-17 04:27:28.594344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.594407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.594431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:43.022 [2024-11-17 04:27:28.594453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:43.022 [2024-11-17 04:27:28.594476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.594526] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:43.022 [2024-11-17 04:27:28.594549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.594635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:43.022 [2024-11-17 04:27:28.594660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:43.022 [2024-11-17 04:27:28.594685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.600515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.600666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:43.022 [2024-11-17 04:27:28.600721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.792 ms 00:21:43.022 [2024-11-17 04:27:28.600743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 [2024-11-17 04:27:28.600833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.022 [2024-11-17 04:27:28.600858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:43.022 [2024-11-17 04:27:28.600888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:43.022 [2024-11-17 04:27:28.600911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.022 
[2024-11-17 04:27:28.602742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.868 ms, result 0 00:21:44.404  [2024-11-17T04:27:31.073Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-17T04:27:32.018Z] Copying: 36/1024 [MB] (20 MBps) [2024-11-17T04:27:32.963Z] Copying: 54/1024 [MB] (17 MBps) [2024-11-17T04:27:33.907Z] Copying: 71/1024 [MB] (17 MBps) [2024-11-17T04:27:34.852Z] Copying: 91/1024 [MB] (20 MBps) [2024-11-17T04:27:35.795Z] Copying: 110/1024 [MB] (18 MBps) [2024-11-17T04:27:37.183Z] Copying: 134/1024 [MB] (23 MBps) [2024-11-17T04:27:38.127Z] Copying: 148/1024 [MB] (14 MBps) [2024-11-17T04:27:39.071Z] Copying: 178/1024 [MB] (29 MBps) [2024-11-17T04:27:40.013Z] Copying: 199/1024 [MB] (21 MBps) [2024-11-17T04:27:40.956Z] Copying: 217/1024 [MB] (17 MBps) [2024-11-17T04:27:41.899Z] Copying: 240/1024 [MB] (22 MBps) [2024-11-17T04:27:42.843Z] Copying: 256/1024 [MB] (16 MBps) [2024-11-17T04:27:43.878Z] Copying: 266/1024 [MB] (10 MBps) [2024-11-17T04:27:44.824Z] Copying: 285/1024 [MB] (18 MBps) [2024-11-17T04:27:46.213Z] Copying: 305/1024 [MB] (19 MBps) [2024-11-17T04:27:47.161Z] Copying: 318/1024 [MB] (13 MBps) [2024-11-17T04:27:48.107Z] Copying: 332/1024 [MB] (13 MBps) [2024-11-17T04:27:49.052Z] Copying: 352/1024 [MB] (20 MBps) [2024-11-17T04:27:50.012Z] Copying: 369/1024 [MB] (16 MBps) [2024-11-17T04:27:50.958Z] Copying: 385/1024 [MB] (15 MBps) [2024-11-17T04:27:51.915Z] Copying: 401/1024 [MB] (15 MBps) [2024-11-17T04:27:52.859Z] Copying: 417/1024 [MB] (16 MBps) [2024-11-17T04:27:53.802Z] Copying: 429/1024 [MB] (12 MBps) [2024-11-17T04:27:55.192Z] Copying: 445/1024 [MB] (15 MBps) [2024-11-17T04:27:56.135Z] Copying: 458/1024 [MB] (12 MBps) [2024-11-17T04:27:57.080Z] Copying: 471/1024 [MB] (12 MBps) [2024-11-17T04:27:58.026Z] Copying: 482/1024 [MB] (11 MBps) [2024-11-17T04:27:58.971Z] Copying: 493/1024 [MB] (10 MBps) [2024-11-17T04:27:59.915Z] Copying: 505/1024 [MB] (12 MBps) [2024-11-17T04:28:00.858Z] Copying: 516/1024 [MB] (11 MBps) [2024-11-17T04:28:01.801Z] Copying: 532/1024 [MB] (15 MBps) [2024-11-17T04:28:03.189Z] Copying: 544/1024 [MB] (12 MBps) [2024-11-17T04:28:04.136Z] Copying: 558/1024 [MB] (14 MBps) [2024-11-17T04:28:05.080Z] Copying: 570/1024 [MB] (11 MBps) [2024-11-17T04:28:06.024Z] Copying: 582/1024 [MB] (11 MBps) [2024-11-17T04:28:06.969Z] Copying: 593/1024 [MB] (11 MBps) [2024-11-17T04:28:07.914Z] Copying: 605/1024 [MB] (11 MBps) [2024-11-17T04:28:08.859Z] Copying: 617/1024 [MB] (11 MBps) [2024-11-17T04:28:09.799Z] Copying: 629/1024 [MB] (12 MBps) [2024-11-17T04:28:11.183Z] Copying: 641/1024 [MB] (11 MBps) [2024-11-17T04:28:12.149Z] Copying: 658/1024 [MB] (17 MBps) [2024-11-17T04:28:13.124Z] Copying: 670/1024 [MB] (12 MBps) [2024-11-17T04:28:14.066Z] Copying: 683/1024 [MB] (12 MBps) [2024-11-17T04:28:15.008Z] Copying: 698/1024 [MB] (14 MBps) [2024-11-17T04:28:15.953Z] Copying: 712/1024 [MB] (13 MBps) [2024-11-17T04:28:16.897Z] Copying: 725/1024 [MB] (13 MBps) [2024-11-17T04:28:17.841Z] Copying: 736/1024 [MB] (11 MBps) [2024-11-17T04:28:19.228Z] Copying: 747/1024 [MB] (11 MBps) [2024-11-17T04:28:19.798Z] Copying: 758/1024 [MB] (10 MBps) [2024-11-17T04:28:21.186Z] Copying: 770/1024 [MB] (11 MBps) [2024-11-17T04:28:22.128Z] Copying: 784/1024 [MB] (14 MBps) [2024-11-17T04:28:23.071Z] Copying: 803/1024 [MB] (18 MBps) [2024-11-17T04:28:24.005Z] Copying: 819/1024 [MB] (15 MBps) [2024-11-17T04:28:24.939Z] Copying: 852/1024 [MB] (33 MBps) [2024-11-17T04:28:25.874Z] Copying: 894/1024 [MB] (41 MBps) 
[2024-11-17T04:28:26.819Z] Copying: 929/1024 [MB] (34 MBps) [2024-11-17T04:28:28.210Z] Copying: 947/1024 [MB] (18 MBps) [2024-11-17T04:28:29.154Z] Copying: 959/1024 [MB] (11 MBps) [2024-11-17T04:28:30.094Z] Copying: 976/1024 [MB] (17 MBps) [2024-11-17T04:28:31.035Z] Copying: 992/1024 [MB] (16 MBps) [2024-11-17T04:28:31.978Z] Copying: 1010/1024 [MB] (18 MBps) [2024-11-17T04:28:31.978Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-17 04:28:31.847137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.251 [2024-11-17 04:28:31.847231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:46.251 [2024-11-17 04:28:31.847253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:46.251 [2024-11-17 04:28:31.847265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.251 [2024-11-17 04:28:31.847294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:46.251 [2024-11-17 04:28:31.848294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.251 [2024-11-17 04:28:31.848341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:46.251 [2024-11-17 04:28:31.848356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:22:46.251 [2024-11-17 04:28:31.848395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.251 [2024-11-17 04:28:31.848697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.251 [2024-11-17 04:28:31.848714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:46.251 [2024-11-17 04:28:31.848726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:22:46.251 [2024-11-17 04:28:31.848736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.251 [2024-11-17 04:28:31.854765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.251 [2024-11-17 04:28:31.854828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:46.251 [2024-11-17 04:28:31.854841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.008 ms 00:22:46.251 [2024-11-17 04:28:31.854849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.251 [2024-11-17 04:28:31.862450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.251 [2024-11-17 04:28:31.862498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:46.251 [2024-11-17 04:28:31.862509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.553 ms 00:22:46.251 [2024-11-17 04:28:31.862525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.251 [2024-11-17 04:28:31.865061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.251 [2024-11-17 04:28:31.865113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:46.251 [2024-11-17 04:28:31.865124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.466 ms 00:22:46.251 [2024-11-17 04:28:31.865133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.252 [2024-11-17 04:28:31.871926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.252 [2024-11-17 04:28:31.872071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:46.252 [2024-11-17 04:28:31.872106] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.735 ms 00:22:46.252 [2024-11-17 04:28:31.872129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.824 [2024-11-17 04:28:32.246027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.825 [2024-11-17 04:28:32.246132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:46.825 [2024-11-17 04:28:32.246151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 373.773 ms 00:22:46.825 [2024-11-17 04:28:32.246161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.825 [2024-11-17 04:28:32.249785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.825 [2024-11-17 04:28:32.249855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:46.825 [2024-11-17 04:28:32.249867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.604 ms 00:22:46.825 [2024-11-17 04:28:32.249876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.825 [2024-11-17 04:28:32.252798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.825 [2024-11-17 04:28:32.252848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:46.825 [2024-11-17 04:28:32.252859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:22:46.825 [2024-11-17 04:28:32.252867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.825 [2024-11-17 04:28:32.255325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.825 [2024-11-17 04:28:32.255395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:46.825 [2024-11-17 04:28:32.255407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.412 ms 00:22:46.825 [2024-11-17 04:28:32.255414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.825 [2024-11-17 04:28:32.257771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.825 [2024-11-17 04:28:32.257824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:46.825 [2024-11-17 04:28:32.257835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:22:46.825 [2024-11-17 04:28:32.257842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.825 [2024-11-17 04:28:32.257882] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:46.825 [2024-11-17 04:28:32.257898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:46.825 [2024-11-17 04:28:32.257911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.257920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.257928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.257937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.257945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.257953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 
[2024-11-17 04:28:32.257962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.257970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.257979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.257986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.257995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:22:46.825 [2024-11-17 04:28:32.258162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:46.825 [2024-11-17 04:28:32.258305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:46.826 [2024-11-17 04:28:32.258730] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:46.826 [2024-11-17 04:28:32.258739] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 56f46c39-64c9-4241-8388-8d11dc80ea47 00:22:46.826 [2024-11-17 04:28:32.258749] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:46.826 [2024-11-17 04:28:32.258756] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 18112 00:22:46.826 [2024-11-17 04:28:32.258780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 17152 00:22:46.826 [2024-11-17 04:28:32.258788] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0560 00:22:46.826 [2024-11-17 04:28:32.258796] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:46.826 [2024-11-17 04:28:32.258805] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] crit: 0 00:22:46.826 [2024-11-17 04:28:32.258820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:46.826 [2024-11-17 04:28:32.258827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:46.826 [2024-11-17 04:28:32.258835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:46.826 [2024-11-17 04:28:32.258843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.826 [2024-11-17 04:28:32.258856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:46.826 [2024-11-17 04:28:32.258866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:22:46.826 [2024-11-17 04:28:32.258873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.826 [2024-11-17 04:28:32.261311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.826 [2024-11-17 04:28:32.261353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:46.826 [2024-11-17 04:28:32.261365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:22:46.826 [2024-11-17 04:28:32.261391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.826 [2024-11-17 04:28:32.261521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.826 [2024-11-17 04:28:32.261531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:46.826 [2024-11-17 04:28:32.261540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:22:46.826 [2024-11-17 04:28:32.261555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.826 [2024-11-17 04:28:32.269341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.826 [2024-11-17 04:28:32.269407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:46.826 [2024-11-17 04:28:32.269420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.826 [2024-11-17 04:28:32.269429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.826 [2024-11-17 04:28:32.269493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.826 [2024-11-17 04:28:32.269503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:46.826 [2024-11-17 04:28:32.269512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.826 [2024-11-17 04:28:32.269520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.826 [2024-11-17 04:28:32.269595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.826 [2024-11-17 04:28:32.269606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:46.826 [2024-11-17 04:28:32.269615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.826 [2024-11-17 04:28:32.269628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.827 [2024-11-17 04:28:32.269647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.827 [2024-11-17 04:28:32.269656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:46.827 [2024-11-17 04:28:32.269666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.827 [2024-11-17 04:28:32.269674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.827 [2024-11-17 
04:28:32.283554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.827 [2024-11-17 04:28:32.283609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:46.827 [2024-11-17 04:28:32.283622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.827 [2024-11-17 04:28:32.283631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.827 [2024-11-17 04:28:32.293880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.827 [2024-11-17 04:28:32.293945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:46.827 [2024-11-17 04:28:32.293957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.827 [2024-11-17 04:28:32.293965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.827 [2024-11-17 04:28:32.294014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.827 [2024-11-17 04:28:32.294028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:46.827 [2024-11-17 04:28:32.294040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.827 [2024-11-17 04:28:32.294048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.827 [2024-11-17 04:28:32.294084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.827 [2024-11-17 04:28:32.294094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:46.827 [2024-11-17 04:28:32.294102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.827 [2024-11-17 04:28:32.294111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.827 [2024-11-17 04:28:32.294179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.827 [2024-11-17 04:28:32.294189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:46.827 [2024-11-17 04:28:32.294200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.827 [2024-11-17 04:28:32.294208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.827 [2024-11-17 04:28:32.294236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.827 [2024-11-17 04:28:32.294246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:46.827 [2024-11-17 04:28:32.294254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.827 [2024-11-17 04:28:32.294262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.827 [2024-11-17 04:28:32.294302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.827 [2024-11-17 04:28:32.294312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:46.827 [2024-11-17 04:28:32.294323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.827 [2024-11-17 04:28:32.294331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.827 [2024-11-17 04:28:32.294394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.827 [2024-11-17 04:28:32.294405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:46.827 [2024-11-17 04:28:32.294415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.827 [2024-11-17 04:28:32.294422] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.827 [2024-11-17 04:28:32.294555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 447.393 ms, result 0 00:22:46.827 00:22:46.827 00:22:46.827 04:28:32 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:49.375 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:49.375 04:28:34 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:49.375 04:28:34 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:49.375 04:28:34 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:49.375 04:28:34 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:49.375 04:28:34 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:49.375 04:28:34 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 85851 00:22:49.375 Process with pid 85851 is not found 00:22:49.375 Remove shared memory files 00:22:49.375 04:28:34 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 85851 ']' 00:22:49.375 04:28:34 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 85851 00:22:49.375 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85851) - No such process 00:22:49.375 04:28:34 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 85851 is not found' 00:22:49.375 04:28:34 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:49.375 04:28:34 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:49.375 04:28:34 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:49.375 04:28:34 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:49.376 04:28:34 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:49.376 04:28:34 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:49.376 04:28:34 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:49.376 00:22:49.376 real 4m47.090s 00:22:49.376 user 4m33.012s 00:22:49.376 sys 0m13.194s 00:22:49.376 04:28:34 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.376 ************************************ 00:22:49.376 END TEST ftl_restore 00:22:49.376 ************************************ 00:22:49.376 04:28:34 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:49.376 04:28:34 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:49.376 04:28:34 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:49.376 04:28:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.376 04:28:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:49.376 ************************************ 00:22:49.376 START TEST ftl_dirty_shutdown 00:22:49.376 ************************************ 00:22:49.376 04:28:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:49.376 * Looking for test storage... 
00:22:49.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.376 --rc genhtml_branch_coverage=1 00:22:49.376 --rc genhtml_function_coverage=1 00:22:49.376 --rc genhtml_legend=1 00:22:49.376 --rc geninfo_all_blocks=1 00:22:49.376 --rc geninfo_unexecuted_blocks=1 00:22:49.376 00:22:49.376 ' 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:49.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.376 --rc genhtml_branch_coverage=1 00:22:49.376 --rc genhtml_function_coverage=1 00:22:49.376 --rc genhtml_legend=1 00:22:49.376 --rc geninfo_all_blocks=1 00:22:49.376 --rc geninfo_unexecuted_blocks=1 00:22:49.376 00:22:49.376 ' 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.376 --rc genhtml_branch_coverage=1 00:22:49.376 --rc genhtml_function_coverage=1 00:22:49.376 --rc genhtml_legend=1 00:22:49.376 --rc geninfo_all_blocks=1 00:22:49.376 --rc geninfo_unexecuted_blocks=1 00:22:49.376 00:22:49.376 ' 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.376 --rc genhtml_branch_coverage=1 00:22:49.376 --rc genhtml_function_coverage=1 00:22:49.376 --rc genhtml_legend=1 00:22:49.376 --rc geninfo_all_blocks=1 00:22:49.376 --rc geninfo_unexecuted_blocks=1 00:22:49.376 00:22:49.376 ' 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:49.376 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:49.637 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:49.638 04:28:35 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=88929 00:22:49.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 88929 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 88929 ']' 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.638 04:28:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:49.638 [2024-11-17 04:28:35.195326] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:22:49.638 [2024-11-17 04:28:35.195758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88929 ] 00:22:49.638 [2024-11-17 04:28:35.359213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.899 [2024-11-17 04:28:35.392600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.471 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.471 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:50.471 04:28:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:50.471 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:50.471 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:50.471 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:50.471 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:50.471 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:50.733 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:50.733 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:50.733 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:50.733 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:50.733 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:50.733 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:50.733 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:50.733 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:50.992 { 00:22:50.992 "name": "nvme0n1", 00:22:50.992 "aliases": [ 00:22:50.992 "e8ba1e2e-f5bd-4c29-82da-dd3cd6a37098" 00:22:50.992 ], 00:22:50.992 "product_name": "NVMe disk", 00:22:50.992 "block_size": 4096, 00:22:50.992 "num_blocks": 1310720, 00:22:50.992 "uuid": "e8ba1e2e-f5bd-4c29-82da-dd3cd6a37098", 00:22:50.992 "numa_id": -1, 00:22:50.992 "assigned_rate_limits": { 00:22:50.992 "rw_ios_per_sec": 0, 00:22:50.992 "rw_mbytes_per_sec": 0, 00:22:50.992 "r_mbytes_per_sec": 0, 00:22:50.992 "w_mbytes_per_sec": 0 00:22:50.992 }, 00:22:50.992 "claimed": true, 00:22:50.992 "claim_type": "read_many_write_one", 00:22:50.992 "zoned": false, 00:22:50.992 "supported_io_types": { 00:22:50.992 "read": true, 00:22:50.992 "write": true, 00:22:50.992 "unmap": true, 00:22:50.992 "flush": true, 00:22:50.992 "reset": true, 00:22:50.992 "nvme_admin": true, 00:22:50.992 "nvme_io": true, 00:22:50.992 "nvme_io_md": false, 00:22:50.992 "write_zeroes": true, 00:22:50.992 "zcopy": false, 00:22:50.992 "get_zone_info": false, 00:22:50.992 "zone_management": false, 00:22:50.992 "zone_append": false, 00:22:50.992 "compare": true, 00:22:50.992 "compare_and_write": false, 00:22:50.992 "abort": true, 00:22:50.992 "seek_hole": false, 00:22:50.992 "seek_data": false, 00:22:50.992 
"copy": true, 00:22:50.992 "nvme_iov_md": false 00:22:50.992 }, 00:22:50.992 "driver_specific": { 00:22:50.992 "nvme": [ 00:22:50.992 { 00:22:50.992 "pci_address": "0000:00:11.0", 00:22:50.992 "trid": { 00:22:50.992 "trtype": "PCIe", 00:22:50.992 "traddr": "0000:00:11.0" 00:22:50.992 }, 00:22:50.992 "ctrlr_data": { 00:22:50.992 "cntlid": 0, 00:22:50.992 "vendor_id": "0x1b36", 00:22:50.992 "model_number": "QEMU NVMe Ctrl", 00:22:50.992 "serial_number": "12341", 00:22:50.992 "firmware_revision": "8.0.0", 00:22:50.992 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:50.992 "oacs": { 00:22:50.992 "security": 0, 00:22:50.992 "format": 1, 00:22:50.992 "firmware": 0, 00:22:50.992 "ns_manage": 1 00:22:50.992 }, 00:22:50.992 "multi_ctrlr": false, 00:22:50.992 "ana_reporting": false 00:22:50.992 }, 00:22:50.992 "vs": { 00:22:50.992 "nvme_version": "1.4" 00:22:50.992 }, 00:22:50.992 "ns_data": { 00:22:50.992 "id": 1, 00:22:50.992 "can_share": false 00:22:50.992 } 00:22:50.992 } 00:22:50.992 ], 00:22:50.992 "mp_policy": "active_passive" 00:22:50.992 } 00:22:50.992 } 00:22:50.992 ]' 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:50.992 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:51.253 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=9649f2db-73b9-402e-a314-574cae5aa724 00:22:51.253 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:51.253 04:28:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9649f2db-73b9-402e-a314-574cae5aa724 00:22:51.514 04:28:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:51.775 04:28:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=9c5670b4-e276-4e77-b33e-5a1ab3e125d7 00:22:51.775 04:28:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9c5670b4-e276-4e77-b33e-5a1ab3e125d7 00:22:51.775 04:28:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:51.775 04:28:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:51.776 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:52.035 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:52.035 { 00:22:52.035 "name": "df2e7fd8-85b6-4587-b9f0-b37a2610f9e9", 00:22:52.035 "aliases": [ 00:22:52.035 "lvs/nvme0n1p0" 00:22:52.035 ], 00:22:52.035 "product_name": "Logical Volume", 00:22:52.035 "block_size": 4096, 00:22:52.035 "num_blocks": 26476544, 00:22:52.035 "uuid": "df2e7fd8-85b6-4587-b9f0-b37a2610f9e9", 00:22:52.035 "assigned_rate_limits": { 00:22:52.035 "rw_ios_per_sec": 0, 00:22:52.035 "rw_mbytes_per_sec": 0, 00:22:52.035 "r_mbytes_per_sec": 0, 00:22:52.035 "w_mbytes_per_sec": 0 00:22:52.035 }, 00:22:52.035 "claimed": false, 00:22:52.035 "zoned": false, 00:22:52.035 "supported_io_types": { 00:22:52.035 "read": true, 00:22:52.035 "write": true, 00:22:52.035 "unmap": true, 00:22:52.035 "flush": false, 00:22:52.035 "reset": true, 00:22:52.035 "nvme_admin": false, 00:22:52.035 "nvme_io": false, 00:22:52.035 "nvme_io_md": false, 00:22:52.035 "write_zeroes": true, 00:22:52.035 "zcopy": false, 00:22:52.035 "get_zone_info": false, 00:22:52.035 "zone_management": false, 00:22:52.035 "zone_append": false, 00:22:52.035 "compare": false, 00:22:52.035 "compare_and_write": false, 00:22:52.035 "abort": false, 00:22:52.035 "seek_hole": true, 00:22:52.035 "seek_data": true, 00:22:52.035 "copy": false, 00:22:52.035 "nvme_iov_md": false 00:22:52.035 }, 00:22:52.035 "driver_specific": { 00:22:52.035 "lvol": { 00:22:52.035 "lvol_store_uuid": "9c5670b4-e276-4e77-b33e-5a1ab3e125d7", 00:22:52.035 "base_bdev": "nvme0n1", 00:22:52.035 "thin_provision": true, 00:22:52.035 "num_allocated_clusters": 0, 00:22:52.035 "snapshot": false, 00:22:52.035 "clone": false, 00:22:52.035 "esnap_clone": false 00:22:52.035 } 00:22:52.035 } 00:22:52.035 } 00:22:52.035 ]' 00:22:52.035 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:52.296 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:52.296 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:52.296 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:52.296 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:52.296 04:28:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:52.296 04:28:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:52.296 04:28:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:52.296 04:28:37 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:52.555 04:28:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:52.555 04:28:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:52.556 04:28:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:52.556 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:52.556 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:52.556 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:52.556 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:52.556 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:52.556 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:52.556 { 00:22:52.556 "name": "df2e7fd8-85b6-4587-b9f0-b37a2610f9e9", 00:22:52.556 "aliases": [ 00:22:52.556 "lvs/nvme0n1p0" 00:22:52.556 ], 00:22:52.556 "product_name": "Logical Volume", 00:22:52.556 "block_size": 4096, 00:22:52.556 "num_blocks": 26476544, 00:22:52.556 "uuid": "df2e7fd8-85b6-4587-b9f0-b37a2610f9e9", 00:22:52.556 "assigned_rate_limits": { 00:22:52.556 "rw_ios_per_sec": 0, 00:22:52.556 "rw_mbytes_per_sec": 0, 00:22:52.556 "r_mbytes_per_sec": 0, 00:22:52.556 "w_mbytes_per_sec": 0 00:22:52.556 }, 00:22:52.556 "claimed": false, 00:22:52.556 "zoned": false, 00:22:52.556 "supported_io_types": { 00:22:52.556 "read": true, 00:22:52.556 "write": true, 00:22:52.556 "unmap": true, 00:22:52.556 "flush": false, 00:22:52.556 "reset": true, 00:22:52.556 "nvme_admin": false, 00:22:52.556 "nvme_io": false, 00:22:52.556 "nvme_io_md": false, 00:22:52.556 "write_zeroes": true, 00:22:52.556 "zcopy": false, 00:22:52.556 "get_zone_info": false, 00:22:52.556 "zone_management": false, 00:22:52.556 "zone_append": false, 00:22:52.556 "compare": false, 00:22:52.556 "compare_and_write": false, 00:22:52.556 "abort": false, 00:22:52.556 "seek_hole": true, 00:22:52.556 "seek_data": true, 00:22:52.556 "copy": false, 00:22:52.556 "nvme_iov_md": false 00:22:52.556 }, 00:22:52.556 "driver_specific": { 00:22:52.556 "lvol": { 00:22:52.556 "lvol_store_uuid": "9c5670b4-e276-4e77-b33e-5a1ab3e125d7", 00:22:52.556 "base_bdev": "nvme0n1", 00:22:52.556 "thin_provision": true, 00:22:52.556 "num_allocated_clusters": 0, 00:22:52.556 "snapshot": false, 00:22:52.556 "clone": false, 00:22:52.556 "esnap_clone": false 00:22:52.556 } 00:22:52.556 } 00:22:52.556 } 00:22:52.556 ]' 00:22:52.556 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:52.814 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:53.072 { 00:22:53.072 "name": "df2e7fd8-85b6-4587-b9f0-b37a2610f9e9", 00:22:53.072 "aliases": [ 00:22:53.072 "lvs/nvme0n1p0" 00:22:53.072 ], 00:22:53.072 "product_name": "Logical Volume", 00:22:53.072 "block_size": 4096, 00:22:53.072 "num_blocks": 26476544, 00:22:53.072 "uuid": "df2e7fd8-85b6-4587-b9f0-b37a2610f9e9", 00:22:53.072 "assigned_rate_limits": { 00:22:53.072 "rw_ios_per_sec": 0, 00:22:53.072 "rw_mbytes_per_sec": 0, 00:22:53.072 "r_mbytes_per_sec": 0, 00:22:53.072 "w_mbytes_per_sec": 0 00:22:53.072 }, 00:22:53.072 "claimed": false, 00:22:53.072 "zoned": false, 00:22:53.072 "supported_io_types": { 00:22:53.072 "read": true, 00:22:53.072 "write": true, 00:22:53.072 "unmap": true, 00:22:53.072 "flush": false, 00:22:53.072 "reset": true, 00:22:53.072 "nvme_admin": false, 00:22:53.072 "nvme_io": false, 00:22:53.072 "nvme_io_md": false, 00:22:53.072 "write_zeroes": true, 00:22:53.072 "zcopy": false, 00:22:53.072 "get_zone_info": false, 00:22:53.072 "zone_management": false, 00:22:53.072 "zone_append": false, 00:22:53.072 "compare": false, 00:22:53.072 "compare_and_write": false, 00:22:53.072 "abort": false, 00:22:53.072 "seek_hole": true, 00:22:53.072 "seek_data": true, 00:22:53.072 "copy": false, 00:22:53.072 "nvme_iov_md": false 00:22:53.072 }, 00:22:53.072 "driver_specific": { 00:22:53.072 "lvol": { 00:22:53.072 "lvol_store_uuid": "9c5670b4-e276-4e77-b33e-5a1ab3e125d7", 00:22:53.072 "base_bdev": "nvme0n1", 00:22:53.072 "thin_provision": true, 00:22:53.072 "num_allocated_clusters": 0, 00:22:53.072 "snapshot": false, 00:22:53.072 "clone": false, 00:22:53.072 "esnap_clone": false 00:22:53.072 } 00:22:53.072 } 00:22:53.072 } 00:22:53.072 ]' 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 
--l2p_dram_limit 10' 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:53.072 04:28:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d df2e7fd8-85b6-4587-b9f0-b37a2610f9e9 --l2p_dram_limit 10 -c nvc0n1p0 00:22:53.331 [2024-11-17 04:28:38.970043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.970084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:53.332 [2024-11-17 04:28:38.970095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:53.332 [2024-11-17 04:28:38.970102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.970141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.970152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.332 [2024-11-17 04:28:38.970160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:53.332 [2024-11-17 04:28:38.970169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.970185] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:53.332 [2024-11-17 04:28:38.970386] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:53.332 [2024-11-17 04:28:38.970402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.970409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.332 [2024-11-17 04:28:38.970415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:22:53.332 [2024-11-17 04:28:38.970422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.970445] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4e176443-4201-40c4-8cf1-bf8a3448302f 00:22:53.332 [2024-11-17 04:28:38.971420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.971442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:53.332 [2024-11-17 04:28:38.971454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:53.332 [2024-11-17 04:28:38.971461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.976214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.976245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.332 [2024-11-17 04:28:38.976253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.696 ms 00:22:53.332 [2024-11-17 04:28:38.976259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.976317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.976325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.332 [2024-11-17 04:28:38.976334] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:53.332 [2024-11-17 04:28:38.976340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.976392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.976400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:53.332 [2024-11-17 04:28:38.976408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:53.332 [2024-11-17 04:28:38.976414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.976448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:53.332 [2024-11-17 04:28:38.977717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.977742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.332 [2024-11-17 04:28:38.977753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:22:53.332 [2024-11-17 04:28:38.977760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.977783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.977792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:53.332 [2024-11-17 04:28:38.977798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:53.332 [2024-11-17 04:28:38.977807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.977824] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:53.332 [2024-11-17 04:28:38.977929] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:53.332 [2024-11-17 04:28:38.977939] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:53.332 [2024-11-17 04:28:38.977953] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:53.332 [2024-11-17 04:28:38.977961] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:53.332 [2024-11-17 04:28:38.977974] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:53.332 [2024-11-17 04:28:38.977980] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:53.332 [2024-11-17 04:28:38.977991] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:53.332 [2024-11-17 04:28:38.977997] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:53.332 [2024-11-17 04:28:38.978003] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:53.332 [2024-11-17 04:28:38.978009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.978016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:53.332 [2024-11-17 04:28:38.978022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:22:53.332 [2024-11-17 04:28:38.978030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.978094] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.332 [2024-11-17 04:28:38.978103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:53.332 [2024-11-17 04:28:38.978109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:53.332 [2024-11-17 04:28:38.978116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.332 [2024-11-17 04:28:38.978192] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:53.332 [2024-11-17 04:28:38.978205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:53.332 [2024-11-17 04:28:38.978210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.332 [2024-11-17 04:28:38.978219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.332 [2024-11-17 04:28:38.978225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:53.332 [2024-11-17 04:28:38.978231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:53.332 [2024-11-17 04:28:38.978237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:53.332 [2024-11-17 04:28:38.978244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:53.332 [2024-11-17 04:28:38.978249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:53.332 [2024-11-17 04:28:38.978255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.332 [2024-11-17 04:28:38.978260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:53.332 [2024-11-17 04:28:38.978267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:53.332 [2024-11-17 04:28:38.978272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.332 [2024-11-17 04:28:38.978281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:53.332 [2024-11-17 04:28:38.978286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:53.332 [2024-11-17 04:28:38.978293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.332 [2024-11-17 04:28:38.978298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:53.332 [2024-11-17 04:28:38.978304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:53.332 [2024-11-17 04:28:38.978309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.332 [2024-11-17 04:28:38.978316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:53.332 [2024-11-17 04:28:38.978321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:53.332 [2024-11-17 04:28:38.978327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.332 [2024-11-17 04:28:38.978332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:53.332 [2024-11-17 04:28:38.978339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:53.332 [2024-11-17 04:28:38.978343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.332 [2024-11-17 04:28:38.978349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:53.332 [2024-11-17 04:28:38.978354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:53.332 [2024-11-17 04:28:38.978361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.332 [2024-11-17 04:28:38.978366] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:53.332 [2024-11-17 04:28:38.978389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:53.332 [2024-11-17 04:28:38.978396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.332 [2024-11-17 04:28:38.978403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:53.332 [2024-11-17 04:28:38.978409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:53.332 [2024-11-17 04:28:38.978416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.332 [2024-11-17 04:28:38.978422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:53.332 [2024-11-17 04:28:38.978429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:53.332 [2024-11-17 04:28:38.978434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.332 [2024-11-17 04:28:38.978441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:53.333 [2024-11-17 04:28:38.978447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:53.333 [2024-11-17 04:28:38.978455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.333 [2024-11-17 04:28:38.978461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:53.333 [2024-11-17 04:28:38.978468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:53.333 [2024-11-17 04:28:38.978474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.333 [2024-11-17 04:28:38.978481] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:53.333 [2024-11-17 04:28:38.978488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:53.333 [2024-11-17 04:28:38.978497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.333 [2024-11-17 04:28:38.978504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.333 [2024-11-17 04:28:38.978513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:53.333 [2024-11-17 04:28:38.978519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:53.333 [2024-11-17 04:28:38.978526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:53.333 [2024-11-17 04:28:38.978532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:53.333 [2024-11-17 04:28:38.978539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:53.333 [2024-11-17 04:28:38.978545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:53.333 [2024-11-17 04:28:38.978555] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:53.333 [2024-11-17 04:28:38.978565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.333 [2024-11-17 04:28:38.978574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:53.333 [2024-11-17 04:28:38.978581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:53.333 [2024-11-17 04:28:38.978589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:53.333 [2024-11-17 04:28:38.978595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:53.333 [2024-11-17 04:28:38.978602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:53.333 [2024-11-17 04:28:38.978609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:53.333 [2024-11-17 04:28:38.978618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:53.333 [2024-11-17 04:28:38.978624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:53.333 [2024-11-17 04:28:38.978631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:53.333 [2024-11-17 04:28:38.978637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:53.333 [2024-11-17 04:28:38.978645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:53.333 [2024-11-17 04:28:38.978651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:53.333 [2024-11-17 04:28:38.978658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:53.333 [2024-11-17 04:28:38.978664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:53.333 [2024-11-17 04:28:38.978671] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:53.333 [2024-11-17 04:28:38.978677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.333 [2024-11-17 04:28:38.978685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:53.333 [2024-11-17 04:28:38.978691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:53.333 [2024-11-17 04:28:38.978698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:53.333 [2024-11-17 04:28:38.978704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:53.333 [2024-11-17 04:28:38.978711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.333 [2024-11-17 04:28:38.978717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:53.333 [2024-11-17 04:28:38.978725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:22:53.333 [2024-11-17 04:28:38.978730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.333 [2024-11-17 04:28:38.978760] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:53.333 [2024-11-17 04:28:38.978767] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:57.592 [2024-11-17 04:28:42.717066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.717160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:57.592 [2024-11-17 04:28:42.717180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3738.280 ms 00:22:57.592 [2024-11-17 04:28:42.717190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.731332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.731405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:57.592 [2024-11-17 04:28:42.731429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.016 ms 00:22:57.592 [2024-11-17 04:28:42.731439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.731573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.731586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:57.592 [2024-11-17 04:28:42.731598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:57.592 [2024-11-17 04:28:42.731607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.743946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.744002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:57.592 [2024-11-17 04:28:42.744017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.293 ms 00:22:57.592 [2024-11-17 04:28:42.744025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.744066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.744075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:57.592 [2024-11-17 04:28:42.744085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:57.592 [2024-11-17 04:28:42.744093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.744760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.744784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:57.592 [2024-11-17 04:28:42.744798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:22:57.592 [2024-11-17 04:28:42.744807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.744931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.744945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:57.592 [2024-11-17 04:28:42.744956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:22:57.592 [2024-11-17 04:28:42.744965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.753057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.753103] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:57.592 [2024-11-17 04:28:42.753116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.068 ms 00:22:57.592 [2024-11-17 04:28:42.753123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.762654] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:57.592 [2024-11-17 04:28:42.766241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.766287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:57.592 [2024-11-17 04:28:42.766299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.051 ms 00:22:57.592 [2024-11-17 04:28:42.766309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.852699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.853016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:57.592 [2024-11-17 04:28:42.853045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.358 ms 00:22:57.592 [2024-11-17 04:28:42.853068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.853277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.853291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:57.592 [2024-11-17 04:28:42.853301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:22:57.592 [2024-11-17 04:28:42.853316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.859618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.859824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:57.592 [2024-11-17 04:28:42.859844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.259 ms 00:22:57.592 [2024-11-17 04:28:42.859860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.865477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.865546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:57.592 [2024-11-17 04:28:42.865561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.570 ms 00:22:57.592 [2024-11-17 04:28:42.865571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.865920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.865942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:57.592 [2024-11-17 04:28:42.865951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:22:57.592 [2024-11-17 04:28:42.865964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.910686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.910909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:57.592 [2024-11-17 04:28:42.910933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.698 ms 00:22:57.592 [2024-11-17 04:28:42.910954] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.918258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.918475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:57.592 [2024-11-17 04:28:42.918498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.226 ms 00:22:57.592 [2024-11-17 04:28:42.918509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.925730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.926101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:57.592 [2024-11-17 04:28:42.926340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.171 ms 00:22:57.592 [2024-11-17 04:28:42.926455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.936277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.936703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:57.592 [2024-11-17 04:28:42.936917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.165 ms 00:22:57.592 [2024-11-17 04:28:42.937001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.592 [2024-11-17 04:28:42.937200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.592 [2024-11-17 04:28:42.937262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:57.592 [2024-11-17 04:28:42.937286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:57.592 [2024-11-17 04:28:42.937352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.593 [2024-11-17 04:28:42.937505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.593 [2024-11-17 04:28:42.937536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:57.593 [2024-11-17 04:28:42.937558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:57.593 [2024-11-17 04:28:42.937640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.593 [2024-11-17 04:28:42.939059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3968.542 ms, result 0 00:22:57.593 { 00:22:57.593 "name": "ftl0", 00:22:57.593 "uuid": "4e176443-4201-40c4-8cf1-bf8a3448302f" 00:22:57.593 } 00:22:57.593 04:28:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:57.593 04:28:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:57.593 04:28:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:57.593 04:28:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:57.593 04:28:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:57.852 /dev/nbd0 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:57.852 1+0 records in 00:22:57.852 1+0 records out 00:22:57.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382156 s, 10.7 MB/s 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:22:57.852 04:28:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:57.852 [2024-11-17 04:28:43.525084] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:22:57.852 [2024-11-17 04:28:43.525724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89072 ] 00:22:58.111 [2024-11-17 04:28:43.689732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.111 [2024-11-17 04:28:43.729993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.490  [2024-11-17T04:28:46.155Z] Copying: 184/1024 [MB] (184 MBps) [2024-11-17T04:28:47.093Z] Copying: 371/1024 [MB] (186 MBps) [2024-11-17T04:28:48.031Z] Copying: 606/1024 [MB] (235 MBps) [2024-11-17T04:28:48.596Z] Copying: 860/1024 [MB] (254 MBps) [2024-11-17T04:28:48.854Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:23:03.127 00:23:03.127 04:28:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:05.037 04:28:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:05.037 [2024-11-17 04:28:50.624576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:05.037 [2024-11-17 04:28:50.625247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89152 ] 00:23:05.296 [2024-11-17 04:28:50.778768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.296 [2024-11-17 04:28:50.801316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.237  [2024-11-17T04:28:52.907Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-17T04:28:54.293Z] Copying: 29/1024 [MB] (15 MBps) [2024-11-17T04:28:54.864Z] Copying: 45/1024 [MB] (15 MBps) [2024-11-17T04:28:56.248Z] Copying: 60/1024 [MB] (14 MBps) [2024-11-17T04:28:57.188Z] Copying: 71/1024 [MB] (11 MBps) [2024-11-17T04:28:58.128Z] Copying: 85/1024 [MB] (14 MBps) [2024-11-17T04:28:59.072Z] Copying: 98/1024 [MB] (12 MBps) [2024-11-17T04:29:00.013Z] Copying: 112/1024 [MB] (13 MBps) [2024-11-17T04:29:00.955Z] Copying: 124/1024 [MB] (11 MBps) [2024-11-17T04:29:01.897Z] Copying: 146/1024 [MB] (22 MBps) [2024-11-17T04:29:03.284Z] Copying: 161/1024 [MB] (15 MBps) [2024-11-17T04:29:04.228Z] Copying: 177/1024 [MB] (15 MBps) [2024-11-17T04:29:05.171Z] Copying: 193/1024 [MB] (16 MBps) [2024-11-17T04:29:06.116Z] Copying: 210/1024 [MB] (16 MBps) [2024-11-17T04:29:07.062Z] Copying: 224/1024 [MB] (14 MBps) [2024-11-17T04:29:08.005Z] Copying: 241/1024 [MB] (16 MBps) [2024-11-17T04:29:08.948Z] Copying: 258/1024 [MB] (17 MBps) [2024-11-17T04:29:09.891Z] Copying: 274/1024 [MB] (15 MBps) [2024-11-17T04:29:11.297Z] Copying: 286/1024 [MB] (12 MBps) [2024-11-17T04:29:11.903Z] Copying: 302/1024 [MB] (16 MBps) [2024-11-17T04:29:13.290Z] Copying: 316/1024 [MB] (13 MBps) [2024-11-17T04:29:14.233Z] Copying: 330/1024 [MB] (13 MBps) [2024-11-17T04:29:15.177Z] Copying: 344/1024 [MB] (14 MBps) [2024-11-17T04:29:16.115Z] Copying: 356/1024 [MB] (12 MBps) [2024-11-17T04:29:17.055Z] Copying: 377/1024 [MB] (20 MBps) [2024-11-17T04:29:17.995Z] Copying: 392/1024 [MB] (15 MBps) [2024-11-17T04:29:18.935Z] Copying: 407/1024 [MB] (14 MBps) [2024-11-17T04:29:19.872Z] Copying: 424/1024 [MB] (16 MBps) [2024-11-17T04:29:21.253Z] Copying: 443/1024 [MB] (19 MBps) [2024-11-17T04:29:22.193Z] Copying: 460/1024 [MB] (16 MBps) [2024-11-17T04:29:23.132Z] Copying: 473/1024 [MB] (13 MBps) [2024-11-17T04:29:24.073Z] Copying: 494/1024 [MB] (20 MBps) [2024-11-17T04:29:25.016Z] Copying: 516/1024 [MB] (22 MBps) [2024-11-17T04:29:25.959Z] Copying: 534/1024 [MB] (17 MBps) [2024-11-17T04:29:26.901Z] Copying: 547/1024 [MB] (13 MBps) [2024-11-17T04:29:28.290Z] Copying: 562/1024 [MB] (15 MBps) [2024-11-17T04:29:29.227Z] Copying: 575/1024 [MB] (12 MBps) [2024-11-17T04:29:30.170Z] Copying: 591/1024 [MB] (16 MBps) [2024-11-17T04:29:31.114Z] Copying: 605/1024 [MB] (13 MBps) [2024-11-17T04:29:32.057Z] Copying: 622/1024 [MB] (17 MBps) [2024-11-17T04:29:32.997Z] Copying: 639/1024 [MB] (16 MBps) [2024-11-17T04:29:33.938Z] Copying: 657/1024 [MB] (18 MBps) [2024-11-17T04:29:34.876Z] Copying: 673/1024 [MB] (16 MBps) [2024-11-17T04:29:36.258Z] Copying: 699/1024 [MB] (25 MBps) [2024-11-17T04:29:37.202Z] Copying: 716/1024 [MB] (17 MBps) [2024-11-17T04:29:38.146Z] Copying: 734/1024 [MB] (17 MBps) [2024-11-17T04:29:39.089Z] Copying: 751/1024 [MB] (17 MBps) [2024-11-17T04:29:40.083Z] Copying: 768/1024 [MB] (17 MBps) [2024-11-17T04:29:41.047Z] Copying: 781/1024 [MB] (13 MBps) [2024-11-17T04:29:41.989Z] Copying: 809/1024 [MB] (27 MBps) [2024-11-17T04:29:42.927Z] 
Copying: 826/1024 [MB] (16 MBps) [2024-11-17T04:29:43.867Z] Copying: 844/1024 [MB] (17 MBps) [2024-11-17T04:29:45.246Z] Copying: 863/1024 [MB] (19 MBps) [2024-11-17T04:29:46.184Z] Copying: 887/1024 [MB] (23 MBps) [2024-11-17T04:29:47.127Z] Copying: 909/1024 [MB] (22 MBps) [2024-11-17T04:29:48.069Z] Copying: 929/1024 [MB] (20 MBps) [2024-11-17T04:29:49.008Z] Copying: 946/1024 [MB] (16 MBps) [2024-11-17T04:29:49.945Z] Copying: 970/1024 [MB] (24 MBps) [2024-11-17T04:29:50.883Z] Copying: 990/1024 [MB] (19 MBps) [2024-11-17T04:29:51.820Z] Copying: 1006/1024 [MB] (16 MBps) [2024-11-17T04:29:51.820Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:24:06.093 00:24:06.093 04:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:06.093 04:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:06.354 04:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:06.615 [2024-11-17 04:29:52.150876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.615 [2024-11-17 04:29:52.150922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:06.615 [2024-11-17 04:29:52.150937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:06.615 [2024-11-17 04:29:52.150946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.615 [2024-11-17 04:29:52.150969] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:06.615 [2024-11-17 04:29:52.151420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.615 [2024-11-17 04:29:52.151445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:06.615 [2024-11-17 04:29:52.151454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:24:06.615 [2024-11-17 04:29:52.151464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.615 [2024-11-17 04:29:52.153853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.615 [2024-11-17 04:29:52.153892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:06.615 [2024-11-17 04:29:52.153903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.369 ms 00:24:06.615 [2024-11-17 04:29:52.153912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.615 [2024-11-17 04:29:52.172011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.615 [2024-11-17 04:29:52.172049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:06.615 [2024-11-17 04:29:52.172062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.082 ms 00:24:06.615 [2024-11-17 04:29:52.172072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.615 [2024-11-17 04:29:52.178355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.615 [2024-11-17 04:29:52.178396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:06.615 [2024-11-17 04:29:52.178406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.252 ms 00:24:06.615 [2024-11-17 04:29:52.178415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.615 [2024-11-17 04:29:52.180298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.615 
[2024-11-17 04:29:52.180478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:06.616 [2024-11-17 04:29:52.180495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.819 ms 00:24:06.616 [2024-11-17 04:29:52.180504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.616 [2024-11-17 04:29:52.185840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.616 [2024-11-17 04:29:52.185895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:06.616 [2024-11-17 04:29:52.185907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.103 ms 00:24:06.616 [2024-11-17 04:29:52.185919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.616 [2024-11-17 04:29:52.186037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.616 [2024-11-17 04:29:52.186051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:06.616 [2024-11-17 04:29:52.186060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:06.616 [2024-11-17 04:29:52.186071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.616 [2024-11-17 04:29:52.188570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.616 [2024-11-17 04:29:52.188608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:06.616 [2024-11-17 04:29:52.188617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:24:06.616 [2024-11-17 04:29:52.188626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.616 [2024-11-17 04:29:52.190948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.616 [2024-11-17 04:29:52.190990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:06.616 [2024-11-17 04:29:52.190999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:24:06.616 [2024-11-17 04:29:52.191008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.616 [2024-11-17 04:29:52.192620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.616 [2024-11-17 04:29:52.192659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:06.616 [2024-11-17 04:29:52.192667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:24:06.616 [2024-11-17 04:29:52.192675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.616 [2024-11-17 04:29:52.194582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.616 [2024-11-17 04:29:52.194619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:06.616 [2024-11-17 04:29:52.194627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.852 ms 00:24:06.616 [2024-11-17 04:29:52.194635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.616 [2024-11-17 04:29:52.194665] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:06.616 [2024-11-17 04:29:52.194680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194917] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.194997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 
04:29:52.195138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:06.616 [2024-11-17 04:29:52.195258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:24:06.617 [2024-11-17 04:29:52.195352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:06.617 [2024-11-17 04:29:52.195599] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:06.617 [2024-11-17 04:29:52.195608] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4e176443-4201-40c4-8cf1-bf8a3448302f 00:24:06.617 
[2024-11-17 04:29:52.195618] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:06.617 [2024-11-17 04:29:52.195625] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:06.617 [2024-11-17 04:29:52.195639] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:06.617 [2024-11-17 04:29:52.195646] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:06.617 [2024-11-17 04:29:52.195656] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:06.617 [2024-11-17 04:29:52.195663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:06.617 [2024-11-17 04:29:52.195673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:06.617 [2024-11-17 04:29:52.195680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:06.617 [2024-11-17 04:29:52.195688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:06.617 [2024-11-17 04:29:52.195695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.617 [2024-11-17 04:29:52.195704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:06.617 [2024-11-17 04:29:52.195712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:24:06.617 [2024-11-17 04:29:52.195724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.197247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.617 [2024-11-17 04:29:52.197275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:06.617 [2024-11-17 04:29:52.197284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:24:06.617 [2024-11-17 04:29:52.197293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.197387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.617 [2024-11-17 04:29:52.197398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:06.617 [2024-11-17 04:29:52.197409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:06.617 [2024-11-17 04:29:52.197419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.202765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.202914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:06.617 [2024-11-17 04:29:52.202930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.202940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.202993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.203007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:06.617 [2024-11-17 04:29:52.203018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.203030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.203107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.203123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:06.617 [2024-11-17 04:29:52.203131] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.203141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.203157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.203167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:06.617 [2024-11-17 04:29:52.203174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.203186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.212780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.212825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:06.617 [2024-11-17 04:29:52.212836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.212845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.220784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.220831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:06.617 [2024-11-17 04:29:52.220841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.220859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.220905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.220919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:06.617 [2024-11-17 04:29:52.220927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.220936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.221027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.221039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:06.617 [2024-11-17 04:29:52.221047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.221056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.221130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.221143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:06.617 [2024-11-17 04:29:52.221150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.221159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.221196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.221207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:06.617 [2024-11-17 04:29:52.221215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.221224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.221262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.221274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:24:06.617 [2024-11-17 04:29:52.221282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.221291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.617 [2024-11-17 04:29:52.221334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.617 [2024-11-17 04:29:52.221346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:06.617 [2024-11-17 04:29:52.221353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.617 [2024-11-17 04:29:52.221363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.618 [2024-11-17 04:29:52.221536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.608 ms, result 0 00:24:06.618 true 00:24:06.618 04:29:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 88929 00:24:06.618 04:29:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid88929 00:24:06.618 04:29:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:06.618 [2024-11-17 04:29:52.306788] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:06.618 [2024-11-17 04:29:52.307029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89797 ] 00:24:06.878 [2024-11-17 04:29:52.460088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.878 [2024-11-17 04:29:52.481740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.817  [2024-11-17T04:29:54.919Z] Copying: 205/1024 [MB] (205 MBps) [2024-11-17T04:29:55.853Z] Copying: 465/1024 [MB] (259 MBps) [2024-11-17T04:29:56.787Z] Copying: 724/1024 [MB] (259 MBps) [2024-11-17T04:29:56.787Z] Copying: 975/1024 [MB] (251 MBps) [2024-11-17T04:29:57.046Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:24:11.319 00:24:11.319 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 88929 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:11.319 04:29:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:11.319 [2024-11-17 04:29:56.939625] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:11.319 [2024-11-17 04:29:56.939757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89844 ] 00:24:11.577 [2024-11-17 04:29:57.094644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.577 [2024-11-17 04:29:57.112294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.577 [2024-11-17 04:29:57.194051] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.577 [2024-11-17 04:29:57.194254] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.577 [2024-11-17 04:29:57.255906] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:11.577 [2024-11-17 04:29:57.256204] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:11.577 [2024-11-17 04:29:57.256756] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:11.837 [2024-11-17 04:29:57.454490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.837 [2024-11-17 04:29:57.454521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:11.837 [2024-11-17 04:29:57.454531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:11.837 [2024-11-17 04:29:57.454540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.837 [2024-11-17 04:29:57.454576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.837 [2024-11-17 04:29:57.454585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.837 [2024-11-17 04:29:57.454593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:11.837 [2024-11-17 04:29:57.454599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.837 [2024-11-17 04:29:57.454616] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:11.837 [2024-11-17 04:29:57.454792] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:11.837 [2024-11-17 04:29:57.454805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.837 [2024-11-17 04:29:57.454811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.837 [2024-11-17 04:29:57.454817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:24:11.837 [2024-11-17 04:29:57.454825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.837 [2024-11-17 04:29:57.455759] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:11.837 [2024-11-17 04:29:57.457854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.837 [2024-11-17 04:29:57.457889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:11.837 [2024-11-17 04:29:57.457897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:24:11.837 [2024-11-17 04:29:57.457904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.837 [2024-11-17 04:29:57.457944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.837 [2024-11-17 04:29:57.457952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:11.837 [2024-11-17 04:29:57.457960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:11.838 [2024-11-17 04:29:57.457966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.838 [2024-11-17 04:29:57.462344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.838 [2024-11-17 04:29:57.462369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.838 [2024-11-17 04:29:57.462385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.348 ms 00:24:11.838 [2024-11-17 04:29:57.462396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.838 [2024-11-17 04:29:57.462459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.838 [2024-11-17 04:29:57.462466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.838 [2024-11-17 04:29:57.462472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:11.838 [2024-11-17 04:29:57.462478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.838 [2024-11-17 04:29:57.462518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.838 [2024-11-17 04:29:57.462525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:11.838 [2024-11-17 04:29:57.462532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:11.838 [2024-11-17 04:29:57.462538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.838 [2024-11-17 04:29:57.462553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:11.838 [2024-11-17 04:29:57.463697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.838 [2024-11-17 04:29:57.463721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.838 [2024-11-17 04:29:57.463727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:24:11.838 [2024-11-17 04:29:57.463737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.838 [2024-11-17 04:29:57.463759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.838 [2024-11-17 04:29:57.463765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:11.838 [2024-11-17 04:29:57.463771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:11.838 [2024-11-17 04:29:57.463777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.838 [2024-11-17 04:29:57.463791] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:11.838 [2024-11-17 04:29:57.463807] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:11.838 [2024-11-17 04:29:57.463833] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:11.838 [2024-11-17 04:29:57.463851] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:11.838 [2024-11-17 04:29:57.463930] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:11.838 [2024-11-17 04:29:57.463938] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:11.838 
[2024-11-17 04:29:57.463946] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:11.838 [2024-11-17 04:29:57.463953] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:11.838 [2024-11-17 04:29:57.463962] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:11.838 [2024-11-17 04:29:57.463968] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:11.838 [2024-11-17 04:29:57.463974] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:11.838 [2024-11-17 04:29:57.463979] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:11.838 [2024-11-17 04:29:57.463987] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:11.838 [2024-11-17 04:29:57.463994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.838 [2024-11-17 04:29:57.464000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:11.838 [2024-11-17 04:29:57.464006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:24:11.838 [2024-11-17 04:29:57.464014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.838 [2024-11-17 04:29:57.464078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.838 [2024-11-17 04:29:57.464085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:11.838 [2024-11-17 04:29:57.464090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:11.838 [2024-11-17 04:29:57.464099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.838 [2024-11-17 04:29:57.464172] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:11.838 [2024-11-17 04:29:57.464185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:11.838 [2024-11-17 04:29:57.464192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.838 [2024-11-17 04:29:57.464197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:11.838 [2024-11-17 04:29:57.464208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:11.838 [2024-11-17 04:29:57.464218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:11.838 [2024-11-17 04:29:57.464224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.838 [2024-11-17 04:29:57.464234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:11.838 [2024-11-17 04:29:57.464239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:11.838 [2024-11-17 04:29:57.464248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.838 [2024-11-17 04:29:57.464255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:11.838 [2024-11-17 04:29:57.464260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:11.838 [2024-11-17 04:29:57.464265] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:11.838 [2024-11-17 04:29:57.464276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:11.838 [2024-11-17 04:29:57.464281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:11.838 [2024-11-17 04:29:57.464291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.838 [2024-11-17 04:29:57.464302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:11.838 [2024-11-17 04:29:57.464307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.838 [2024-11-17 04:29:57.464317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:11.838 [2024-11-17 04:29:57.464322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.838 [2024-11-17 04:29:57.464336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:11.838 [2024-11-17 04:29:57.464342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.838 [2024-11-17 04:29:57.464352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:11.838 [2024-11-17 04:29:57.464357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.838 [2024-11-17 04:29:57.464366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:11.838 [2024-11-17 04:29:57.464371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:11.838 [2024-11-17 04:29:57.464389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.838 [2024-11-17 04:29:57.464394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:11.838 [2024-11-17 04:29:57.464399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:11.838 [2024-11-17 04:29:57.464405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:11.838 [2024-11-17 04:29:57.464415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:11.838 [2024-11-17 04:29:57.464419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.838 [2024-11-17 04:29:57.464424] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:11.838 [2024-11-17 04:29:57.464431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:11.838 [2024-11-17 04:29:57.464440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.838 [2024-11-17 04:29:57.464446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.838 [2024-11-17 
04:29:57.464451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:11.838 [2024-11-17 04:29:57.464473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:11.838 [2024-11-17 04:29:57.464478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:11.838 [2024-11-17 04:29:57.464483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:11.838 [2024-11-17 04:29:57.464489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:11.838 [2024-11-17 04:29:57.464494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:11.838 [2024-11-17 04:29:57.464500] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:11.838 [2024-11-17 04:29:57.464507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.838 [2024-11-17 04:29:57.464513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:11.838 [2024-11-17 04:29:57.464521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:11.838 [2024-11-17 04:29:57.464526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:11.838 [2024-11-17 04:29:57.464531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:11.839 [2024-11-17 04:29:57.464537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:11.839 [2024-11-17 04:29:57.464544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:11.839 [2024-11-17 04:29:57.464550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:11.839 [2024-11-17 04:29:57.464555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:11.839 [2024-11-17 04:29:57.464561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:11.839 [2024-11-17 04:29:57.464567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:11.839 [2024-11-17 04:29:57.464572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:11.839 [2024-11-17 04:29:57.464577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:11.839 [2024-11-17 04:29:57.464582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:11.839 [2024-11-17 04:29:57.464588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:11.839 [2024-11-17 04:29:57.464593] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:11.839 [2024-11-17 04:29:57.464602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.839 [2024-11-17 04:29:57.464610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:11.839 [2024-11-17 04:29:57.464615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:11.839 [2024-11-17 04:29:57.464621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:11.839 [2024-11-17 04:29:57.464626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:11.839 [2024-11-17 04:29:57.464631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.464638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:11.839 [2024-11-17 04:29:57.464645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:24:11.839 [2024-11-17 04:29:57.464651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.472579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.472609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:11.839 [2024-11-17 04:29:57.472616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.889 ms 00:24:11.839 [2024-11-17 04:29:57.472622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.472682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.472691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:11.839 [2024-11-17 04:29:57.472701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:11.839 [2024-11-17 04:29:57.472706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.488251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.488285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:11.839 [2024-11-17 04:29:57.488294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.513 ms 00:24:11.839 [2024-11-17 04:29:57.488301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.488337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.488344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:11.839 [2024-11-17 04:29:57.488351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:11.839 [2024-11-17 04:29:57.488357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.488733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.488747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:11.839 [2024-11-17 04:29:57.488754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:24:11.839 [2024-11-17 04:29:57.488760] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.488857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.488872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:11.839 [2024-11-17 04:29:57.488879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:11.839 [2024-11-17 04:29:57.488885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.494849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.494891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:11.839 [2024-11-17 04:29:57.494905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.944 ms 00:24:11.839 [2024-11-17 04:29:57.494917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.497759] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:11.839 [2024-11-17 04:29:57.497807] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:11.839 [2024-11-17 04:29:57.497827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.497839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:11.839 [2024-11-17 04:29:57.497851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:24:11.839 [2024-11-17 04:29:57.497861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.509907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.509940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:11.839 [2024-11-17 04:29:57.509949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.997 ms 00:24:11.839 [2024-11-17 04:29:57.509955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.511518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.511628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:11.839 [2024-11-17 04:29:57.511640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:24:11.839 [2024-11-17 04:29:57.511645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.512953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.512980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:11.839 [2024-11-17 04:29:57.512988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:24:11.839 [2024-11-17 04:29:57.512994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.513232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.513249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:11.839 [2024-11-17 04:29:57.513259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:24:11.839 [2024-11-17 04:29:57.513264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 
[2024-11-17 04:29:57.527429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.527466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:11.839 [2024-11-17 04:29:57.527475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.153 ms 00:24:11.839 [2024-11-17 04:29:57.527481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.533328] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:11.839 [2024-11-17 04:29:57.535330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.535475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:11.839 [2024-11-17 04:29:57.535487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.815 ms 00:24:11.839 [2024-11-17 04:29:57.535493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.535542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.535553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:11.839 [2024-11-17 04:29:57.535561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:11.839 [2024-11-17 04:29:57.535569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.535620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.535628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:11.839 [2024-11-17 04:29:57.535635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:11.839 [2024-11-17 04:29:57.535640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.535656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.535662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:11.839 [2024-11-17 04:29:57.535668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:11.839 [2024-11-17 04:29:57.535674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.535700] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:11.839 [2024-11-17 04:29:57.535708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.535716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:11.839 [2024-11-17 04:29:57.535722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:11.839 [2024-11-17 04:29:57.535728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.538537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.538640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:11.839 [2024-11-17 04:29:57.538652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms 00:24:11.839 [2024-11-17 04:29:57.538659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.839 [2024-11-17 04:29:57.538714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.839 [2024-11-17 04:29:57.538723] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:11.839 [2024-11-17 04:29:57.538730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:11.839 [2024-11-17 04:29:57.538736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.840 [2024-11-17 04:29:57.539464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 84.656 ms, result 0 00:24:13.214  [2024-11-17T04:29:59.875Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-17T04:30:00.808Z] Copying: 79/1024 [MB] (33 MBps) [2024-11-17T04:30:01.741Z] Copying: 111/1024 [MB] (31 MBps) [2024-11-17T04:30:02.675Z] Copying: 137/1024 [MB] (26 MBps) [2024-11-17T04:30:03.611Z] Copying: 163/1024 [MB] (25 MBps) [2024-11-17T04:30:04.982Z] Copying: 189/1024 [MB] (25 MBps) [2024-11-17T04:30:05.551Z] Copying: 218/1024 [MB] (28 MBps) [2024-11-17T04:30:06.935Z] Copying: 265/1024 [MB] (47 MBps) [2024-11-17T04:30:07.877Z] Copying: 279/1024 [MB] (14 MBps) [2024-11-17T04:30:08.875Z] Copying: 293/1024 [MB] (13 MBps) [2024-11-17T04:30:09.816Z] Copying: 306/1024 [MB] (13 MBps) [2024-11-17T04:30:10.757Z] Copying: 319/1024 [MB] (12 MBps) [2024-11-17T04:30:11.697Z] Copying: 333/1024 [MB] (14 MBps) [2024-11-17T04:30:12.642Z] Copying: 367/1024 [MB] (34 MBps) [2024-11-17T04:30:13.588Z] Copying: 382/1024 [MB] (14 MBps) [2024-11-17T04:30:14.979Z] Copying: 400/1024 [MB] (18 MBps) [2024-11-17T04:30:15.552Z] Copying: 415/1024 [MB] (15 MBps) [2024-11-17T04:30:16.942Z] Copying: 432/1024 [MB] (16 MBps) [2024-11-17T04:30:17.888Z] Copying: 447/1024 [MB] (15 MBps) [2024-11-17T04:30:18.833Z] Copying: 460/1024 [MB] (12 MBps) [2024-11-17T04:30:19.774Z] Copying: 473/1024 [MB] (12 MBps) [2024-11-17T04:30:20.719Z] Copying: 490/1024 [MB] (17 MBps) [2024-11-17T04:30:21.662Z] Copying: 507/1024 [MB] (16 MBps) [2024-11-17T04:30:22.606Z] Copying: 523/1024 [MB] (16 MBps) [2024-11-17T04:30:23.549Z] Copying: 538/1024 [MB] (14 MBps) [2024-11-17T04:30:24.928Z] Copying: 551/1024 [MB] (13 MBps) [2024-11-17T04:30:25.869Z] Copying: 579/1024 [MB] (27 MBps) [2024-11-17T04:30:26.810Z] Copying: 609/1024 [MB] (30 MBps) [2024-11-17T04:30:27.750Z] Copying: 620/1024 [MB] (10 MBps) [2024-11-17T04:30:28.693Z] Copying: 655/1024 [MB] (34 MBps) [2024-11-17T04:30:29.637Z] Copying: 668/1024 [MB] (13 MBps) [2024-11-17T04:30:30.582Z] Copying: 680/1024 [MB] (11 MBps) [2024-11-17T04:30:31.968Z] Copying: 691/1024 [MB] (10 MBps) [2024-11-17T04:30:32.903Z] Copying: 704/1024 [MB] (13 MBps) [2024-11-17T04:30:33.836Z] Copying: 729/1024 [MB] (24 MBps) [2024-11-17T04:30:34.769Z] Copying: 761/1024 [MB] (31 MBps) [2024-11-17T04:30:35.710Z] Copying: 800/1024 [MB] (39 MBps) [2024-11-17T04:30:36.650Z] Copying: 832/1024 [MB] (31 MBps) [2024-11-17T04:30:37.658Z] Copying: 859/1024 [MB] (26 MBps) [2024-11-17T04:30:38.599Z] Copying: 875/1024 [MB] (15 MBps) [2024-11-17T04:30:39.980Z] Copying: 898/1024 [MB] (23 MBps) [2024-11-17T04:30:40.553Z] Copying: 922/1024 [MB] (24 MBps) [2024-11-17T04:30:41.941Z] Copying: 943/1024 [MB] (20 MBps) [2024-11-17T04:30:42.880Z] Copying: 963/1024 [MB] (20 MBps) [2024-11-17T04:30:43.823Z] Copying: 985/1024 [MB] (21 MBps) [2024-11-17T04:30:44.767Z] Copying: 1007/1024 [MB] (22 MBps) [2024-11-17T04:30:45.338Z] Copying: 1023/1024 [MB] (16 MBps) [2024-11-17T04:30:45.338Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-17 04:30:45.179643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.179728] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:59.611 [2024-11-17 04:30:45.179746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:59.611 [2024-11-17 04:30:45.179764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.181354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:59.611 [2024-11-17 04:30:45.185533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.185591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:59.611 [2024-11-17 04:30:45.185608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.115 ms 00:24:59.611 [2024-11-17 04:30:45.185618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.195528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.195575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:59.611 [2024-11-17 04:30:45.195586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.672 ms 00:24:59.611 [2024-11-17 04:30:45.195596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.217992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.218052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:59.611 [2024-11-17 04:30:45.218064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.378 ms 00:24:59.611 [2024-11-17 04:30:45.218073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.224297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.224336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:59.611 [2024-11-17 04:30:45.224349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.184 ms 00:24:59.611 [2024-11-17 04:30:45.224358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.227145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.227193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:59.611 [2024-11-17 04:30:45.227204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.702 ms 00:24:59.611 [2024-11-17 04:30:45.227211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.231449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.231509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:59.611 [2024-11-17 04:30:45.231519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.197 ms 00:24:59.611 [2024-11-17 04:30:45.231529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.295901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.295942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:59.611 [2024-11-17 04:30:45.295954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.317 ms 00:24:59.611 [2024-11-17 04:30:45.295962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.297696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.297859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:59.611 [2024-11-17 04:30:45.297876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:24:59.611 [2024-11-17 04:30:45.297884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.299154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.299195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:59.611 [2024-11-17 04:30:45.299204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:24:59.611 [2024-11-17 04:30:45.299211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.300307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.300345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:59.611 [2024-11-17 04:30:45.300355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:24:59.611 [2024-11-17 04:30:45.300362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.301636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.611 [2024-11-17 04:30:45.301673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:59.611 [2024-11-17 04:30:45.301682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:24:59.611 [2024-11-17 04:30:45.301689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.611 [2024-11-17 04:30:45.301720] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:59.611 [2024-11-17 04:30:45.301734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104704 / 261120 wr_cnt: 1 state: open 00:24:59.611 [2024-11-17 04:30:45.301750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 
04:30:45.301830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.301994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:24:59.612 [2024-11-17 04:30:45.302017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:59.612 [2024-11-17 04:30:45.302435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:59.613 [2024-11-17 04:30:45.302442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:59.613 [2024-11-17 04:30:45.302450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:59.613 [2024-11-17 04:30:45.302458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:59.613 [2024-11-17 04:30:45.302466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:59.613 [2024-11-17 04:30:45.302473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:59.613 [2024-11-17 04:30:45.302482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:59.613 [2024-11-17 04:30:45.302490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:59.613 [2024-11-17 04:30:45.302497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:59.613 [2024-11-17 04:30:45.302505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:59.613 [2024-11-17 04:30:45.302521] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:59.613 [2024-11-17 04:30:45.302533] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4e176443-4201-40c4-8cf1-bf8a3448302f 00:24:59.613 [2024-11-17 04:30:45.302541] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104704 00:24:59.613 [2024-11-17 04:30:45.302549] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105664 00:24:59.613 [2024-11-17 04:30:45.302556] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104704 00:24:59.613 [2024-11-17 04:30:45.302564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:24:59.613 [2024-11-17 04:30:45.302571] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:59.613 [2024-11-17 04:30:45.302585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:59.613 [2024-11-17 04:30:45.302597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:59.613 [2024-11-17 04:30:45.302604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:59.613 [2024-11-17 04:30:45.302611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:59.613 [2024-11-17 04:30:45.302618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.613 [2024-11-17 04:30:45.302625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:24:59.613 [2024-11-17 04:30:45.302634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:24:59.613 [2024-11-17 04:30:45.302644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.304415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.613 [2024-11-17 04:30:45.304435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:59.613 [2024-11-17 04:30:45.304447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.754 ms 00:24:59.613 [2024-11-17 04:30:45.304454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.304580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.613 [2024-11-17 04:30:45.304593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:59.613 [2024-11-17 04:30:45.304603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:24:59.613 [2024-11-17 04:30:45.304610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.310628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.310750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:59.613 [2024-11-17 04:30:45.310802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.310825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.310894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.310923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:59.613 [2024-11-17 04:30:45.310948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.310966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.311019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.311127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:59.613 [2024-11-17 04:30:45.311147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.311167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.311193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.311214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:59.613 [2024-11-17 04:30:45.311237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.311290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.322433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.322603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:59.613 [2024-11-17 04:30:45.322655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.322677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.331905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.332079] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:59.613 [2024-11-17 04:30:45.332134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.332158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.332222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.332245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:59.613 [2024-11-17 04:30:45.332266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.332286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.332321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.332442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:59.613 [2024-11-17 04:30:45.332471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.332511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.332604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.332805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:59.613 [2024-11-17 04:30:45.332940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.332963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.333013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.333059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:59.613 [2024-11-17 04:30:45.333079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.333128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.333204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.333316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:59.613 [2024-11-17 04:30:45.333340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.333359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.333461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.613 [2024-11-17 04:30:45.333488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:59.613 [2024-11-17 04:30:45.333540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.613 [2024-11-17 04:30:45.333570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.613 [2024-11-17 04:30:45.333720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 155.906 ms, result 0 00:25:00.995 00:25:00.995 00:25:00.995 04:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:02.906 04:30:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:03.166 [2024-11-17 04:30:48.639771] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:25:03.166 [2024-11-17 04:30:48.640108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90368 ] 00:25:03.166 [2024-11-17 04:30:48.803044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.166 [2024-11-17 04:30:48.831861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.428 [2024-11-17 04:30:48.943734] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.428 [2024-11-17 04:30:48.944069] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.428 [2024-11-17 04:30:49.105876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.428 [2024-11-17 04:30:49.106091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:03.428 [2024-11-17 04:30:49.106291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:03.428 [2024-11-17 04:30:49.106322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.428 [2024-11-17 04:30:49.106434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.428 [2024-11-17 04:30:49.106447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:03.428 [2024-11-17 04:30:49.106456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:03.428 [2024-11-17 04:30:49.106464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.428 [2024-11-17 04:30:49.106491] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:03.428 [2024-11-17 04:30:49.106775] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:03.428 [2024-11-17 04:30:49.106792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.428 [2024-11-17 04:30:49.106806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:03.428 [2024-11-17 04:30:49.106816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:25:03.428 [2024-11-17 04:30:49.106829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.428 [2024-11-17 04:30:49.108505] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:03.428 [2024-11-17 04:30:49.112215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.428 [2024-11-17 04:30:49.112404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:03.428 [2024-11-17 04:30:49.112425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.744 ms 00:25:03.428 [2024-11-17 04:30:49.112443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.428 [2024-11-17 04:30:49.112531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.428 [2024-11-17 04:30:49.112547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:03.428 [2024-11-17 04:30:49.112557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:03.428 
[2024-11-17 04:30:49.112565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.428 [2024-11-17 04:30:49.120521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.428 [2024-11-17 04:30:49.120560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:03.428 [2024-11-17 04:30:49.120575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.909 ms 00:25:03.428 [2024-11-17 04:30:49.120583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.428 [2024-11-17 04:30:49.120690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.428 [2024-11-17 04:30:49.120704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:03.428 [2024-11-17 04:30:49.120712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:25:03.428 [2024-11-17 04:30:49.120720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.428 [2024-11-17 04:30:49.120778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.428 [2024-11-17 04:30:49.120788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:03.428 [2024-11-17 04:30:49.120802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:03.428 [2024-11-17 04:30:49.120810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.428 [2024-11-17 04:30:49.120836] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:03.428 [2024-11-17 04:30:49.122975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.428 [2024-11-17 04:30:49.123013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:03.428 [2024-11-17 04:30:49.123024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.144 ms 00:25:03.429 [2024-11-17 04:30:49.123038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.429 [2024-11-17 04:30:49.123077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.429 [2024-11-17 04:30:49.123091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:03.429 [2024-11-17 04:30:49.123100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:03.429 [2024-11-17 04:30:49.123108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.429 [2024-11-17 04:30:49.123135] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:03.429 [2024-11-17 04:30:49.123158] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:03.429 [2024-11-17 04:30:49.123196] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:03.429 [2024-11-17 04:30:49.123218] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:03.429 [2024-11-17 04:30:49.123325] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:03.429 [2024-11-17 04:30:49.123337] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:03.429 [2024-11-17 04:30:49.123349] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:25:03.429 [2024-11-17 04:30:49.123367] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:03.429 [2024-11-17 04:30:49.123407] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:03.429 [2024-11-17 04:30:49.123417] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:03.429 [2024-11-17 04:30:49.123424] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:03.429 [2024-11-17 04:30:49.123437] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:03.429 [2024-11-17 04:30:49.123444] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:03.429 [2024-11-17 04:30:49.123452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.429 [2024-11-17 04:30:49.123464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:03.429 [2024-11-17 04:30:49.123473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:25:03.429 [2024-11-17 04:30:49.123481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.429 [2024-11-17 04:30:49.123582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.429 [2024-11-17 04:30:49.123598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:03.429 [2024-11-17 04:30:49.123607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:03.429 [2024-11-17 04:30:49.123615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.429 [2024-11-17 04:30:49.123714] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:03.429 [2024-11-17 04:30:49.123739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:03.429 [2024-11-17 04:30:49.123748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:03.429 [2024-11-17 04:30:49.123761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.429 [2024-11-17 04:30:49.123769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:03.429 [2024-11-17 04:30:49.123781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:03.429 [2024-11-17 04:30:49.123789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:03.429 [2024-11-17 04:30:49.123796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:03.429 [2024-11-17 04:30:49.123803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:03.429 [2024-11-17 04:30:49.123811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:03.429 [2024-11-17 04:30:49.123818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:03.429 [2024-11-17 04:30:49.123825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:03.429 [2024-11-17 04:30:49.123833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:03.429 [2024-11-17 04:30:49.123839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:03.429 [2024-11-17 04:30:49.123846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:03.429 [2024-11-17 04:30:49.123853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.429 [2024-11-17 04:30:49.123860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:25:03.429 [2024-11-17 04:30:49.123869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:03.429 [2024-11-17 04:30:49.123875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.429 [2024-11-17 04:30:49.123882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:03.429 [2024-11-17 04:30:49.123889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:03.429 [2024-11-17 04:30:49.123898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.429 [2024-11-17 04:30:49.123905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:03.429 [2024-11-17 04:30:49.123912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:03.429 [2024-11-17 04:30:49.123918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.429 [2024-11-17 04:30:49.123925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:03.429 [2024-11-17 04:30:49.123932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:03.429 [2024-11-17 04:30:49.123939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.429 [2024-11-17 04:30:49.123945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:03.429 [2024-11-17 04:30:49.123952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:03.429 [2024-11-17 04:30:49.123958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.429 [2024-11-17 04:30:49.123964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:03.429 [2024-11-17 04:30:49.123971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:03.429 [2024-11-17 04:30:49.123980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:03.429 [2024-11-17 04:30:49.123987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:03.429 [2024-11-17 04:30:49.123993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:03.429 [2024-11-17 04:30:49.124000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:03.429 [2024-11-17 04:30:49.124007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:03.429 [2024-11-17 04:30:49.124013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:03.429 [2024-11-17 04:30:49.124019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.429 [2024-11-17 04:30:49.124026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:03.429 [2024-11-17 04:30:49.124032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:03.429 [2024-11-17 04:30:49.124038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.429 [2024-11-17 04:30:49.124045] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:03.429 [2024-11-17 04:30:49.124055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:03.429 [2024-11-17 04:30:49.124068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:03.429 [2024-11-17 04:30:49.124079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.429 [2024-11-17 04:30:49.124086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:03.429 [2024-11-17 04:30:49.124093] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:03.429 [2024-11-17 04:30:49.124102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:03.429 [2024-11-17 04:30:49.124109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:03.429 [2024-11-17 04:30:49.124115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:03.429 [2024-11-17 04:30:49.124122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:03.429 [2024-11-17 04:30:49.124133] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:03.429 [2024-11-17 04:30:49.124144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:03.429 [2024-11-17 04:30:49.124157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:03.429 [2024-11-17 04:30:49.124165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:03.429 [2024-11-17 04:30:49.124173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:03.429 [2024-11-17 04:30:49.124180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:03.429 [2024-11-17 04:30:49.124187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:03.429 [2024-11-17 04:30:49.124195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:03.429 [2024-11-17 04:30:49.124203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:03.429 [2024-11-17 04:30:49.124209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:03.429 [2024-11-17 04:30:49.124216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:03.429 [2024-11-17 04:30:49.124223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:03.429 [2024-11-17 04:30:49.124232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:03.429 [2024-11-17 04:30:49.124240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:03.429 [2024-11-17 04:30:49.124247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:03.429 [2024-11-17 04:30:49.124254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:03.429 [2024-11-17 04:30:49.124261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:03.429 [2024-11-17 04:30:49.124270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:03.429 [2024-11-17 04:30:49.124277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:03.430 [2024-11-17 04:30:49.124284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:03.430 [2024-11-17 04:30:49.124292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:03.430 [2024-11-17 04:30:49.124299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:03.430 [2024-11-17 04:30:49.124306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.430 [2024-11-17 04:30:49.124314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:03.430 [2024-11-17 04:30:49.124321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:25:03.430 [2024-11-17 04:30:49.124329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.430 [2024-11-17 04:30:49.138219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.430 [2024-11-17 04:30:49.138270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:03.430 [2024-11-17 04:30:49.138282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.824 ms 00:25:03.430 [2024-11-17 04:30:49.138290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.430 [2024-11-17 04:30:49.138398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.430 [2024-11-17 04:30:49.138409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:03.430 [2024-11-17 04:30:49.138418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:03.430 [2024-11-17 04:30:49.138433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.162441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.162715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:03.690 [2024-11-17 04:30:49.162750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.946 ms 00:25:03.690 [2024-11-17 04:30:49.162766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.162846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.162865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:03.690 [2024-11-17 04:30:49.162882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:03.690 [2024-11-17 04:30:49.162896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.163596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.163640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:03.690 [2024-11-17 04:30:49.163661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:25:03.690 [2024-11-17 04:30:49.163677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.163923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.163963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:03.690 [2024-11-17 04:30:49.163981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:25:03.690 [2024-11-17 04:30:49.163995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.173107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.173160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:03.690 [2024-11-17 04:30:49.173177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.069 ms 00:25:03.690 [2024-11-17 04:30:49.173185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.177082] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:03.690 [2024-11-17 04:30:49.177255] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:03.690 [2024-11-17 04:30:49.177280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.177289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:03.690 [2024-11-17 04:30:49.177298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:25:03.690 [2024-11-17 04:30:49.177305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.193223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.193279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:03.690 [2024-11-17 04:30:49.193291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.871 ms 00:25:03.690 [2024-11-17 04:30:49.193300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.196192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.196352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:03.690 [2024-11-17 04:30:49.196370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.834 ms 00:25:03.690 [2024-11-17 04:30:49.196397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.199162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.199210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:03.690 [2024-11-17 04:30:49.199219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:25:03.690 [2024-11-17 04:30:49.199227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.199697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.199788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:03.690 [2024-11-17 04:30:49.199850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:25:03.690 [2024-11-17 04:30:49.199861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.223282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.223498] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:03.690 [2024-11-17 04:30:49.223519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.377 ms 00:25:03.690 [2024-11-17 04:30:49.223529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.231958] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:03.690 [2024-11-17 04:30:49.235338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.235539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:03.690 [2024-11-17 04:30:49.235560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.755 ms 00:25:03.690 [2024-11-17 04:30:49.235569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.235658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.235670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:03.690 [2024-11-17 04:30:49.235679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:03.690 [2024-11-17 04:30:49.235693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.237656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.237701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:03.690 [2024-11-17 04:30:49.237722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.924 ms 00:25:03.690 [2024-11-17 04:30:49.237730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.237759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.237768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:03.690 [2024-11-17 04:30:49.237777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:03.690 [2024-11-17 04:30:49.237784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.237824] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:03.690 [2024-11-17 04:30:49.237835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.237844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:03.690 [2024-11-17 04:30:49.237853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:03.690 [2024-11-17 04:30:49.237864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.243773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.243834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:03.690 [2024-11-17 04:30:49.243846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.890 ms 00:25:03.690 [2024-11-17 04:30:49.243854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.243941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.690 [2024-11-17 04:30:49.243951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:03.690 [2024-11-17 04:30:49.243960] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:03.690 [2024-11-17 04:30:49.243969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.690 [2024-11-17 04:30:49.245180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.849 ms, result 0 00:25:05.073  [2024-11-17T04:30:51.743Z] Copying: 1104/1048576 [kB] (1104 kBps) [2024-11-17T04:30:52.687Z] Copying: 4380/1048576 [kB] (3276 kBps) [2024-11-17T04:30:53.634Z] Copying: 14/1024 [MB] (10 MBps) [2024-11-17T04:30:54.579Z] Copying: 30/1024 [MB] (15 MBps) [2024-11-17T04:30:55.525Z] Copying: 46/1024 [MB] (15 MBps) [2024-11-17T04:30:56.470Z] Copying: 62/1024 [MB] (16 MBps) [2024-11-17T04:30:57.858Z] Copying: 79/1024 [MB] (16 MBps) [2024-11-17T04:30:58.432Z] Copying: 95/1024 [MB] (16 MBps) [2024-11-17T04:30:59.817Z] Copying: 112/1024 [MB] (16 MBps) [2024-11-17T04:31:00.760Z] Copying: 128/1024 [MB] (16 MBps) [2024-11-17T04:31:01.702Z] Copying: 145/1024 [MB] (16 MBps) [2024-11-17T04:31:02.644Z] Copying: 164/1024 [MB] (19 MBps) [2024-11-17T04:31:03.588Z] Copying: 194/1024 [MB] (29 MBps) [2024-11-17T04:31:04.529Z] Copying: 216/1024 [MB] (22 MBps) [2024-11-17T04:31:05.471Z] Copying: 244/1024 [MB] (28 MBps) [2024-11-17T04:31:06.452Z] Copying: 270/1024 [MB] (26 MBps) [2024-11-17T04:31:07.850Z] Copying: 294/1024 [MB] (23 MBps) [2024-11-17T04:31:08.796Z] Copying: 326/1024 [MB] (32 MBps) [2024-11-17T04:31:09.742Z] Copying: 354/1024 [MB] (28 MBps) [2024-11-17T04:31:10.684Z] Copying: 381/1024 [MB] (26 MBps) [2024-11-17T04:31:11.625Z] Copying: 409/1024 [MB] (27 MBps) [2024-11-17T04:31:12.567Z] Copying: 437/1024 [MB] (28 MBps) [2024-11-17T04:31:13.512Z] Copying: 465/1024 [MB] (27 MBps) [2024-11-17T04:31:14.453Z] Copying: 495/1024 [MB] (29 MBps) [2024-11-17T04:31:15.836Z] Copying: 525/1024 [MB] (29 MBps) [2024-11-17T04:31:16.778Z] Copying: 554/1024 [MB] (29 MBps) [2024-11-17T04:31:17.723Z] Copying: 584/1024 [MB] (30 MBps) [2024-11-17T04:31:18.666Z] Copying: 614/1024 [MB] (29 MBps) [2024-11-17T04:31:19.611Z] Copying: 645/1024 [MB] (30 MBps) [2024-11-17T04:31:20.651Z] Copying: 677/1024 [MB] (32 MBps) [2024-11-17T04:31:21.622Z] Copying: 705/1024 [MB] (27 MBps) [2024-11-17T04:31:22.567Z] Copying: 729/1024 [MB] (24 MBps) [2024-11-17T04:31:23.511Z] Copying: 754/1024 [MB] (24 MBps) [2024-11-17T04:31:24.459Z] Copying: 784/1024 [MB] (29 MBps) [2024-11-17T04:31:25.847Z] Copying: 813/1024 [MB] (29 MBps) [2024-11-17T04:31:26.788Z] Copying: 829/1024 [MB] (15 MBps) [2024-11-17T04:31:27.731Z] Copying: 845/1024 [MB] (15 MBps) [2024-11-17T04:31:28.674Z] Copying: 861/1024 [MB] (16 MBps) [2024-11-17T04:31:29.620Z] Copying: 877/1024 [MB] (15 MBps) [2024-11-17T04:31:30.567Z] Copying: 894/1024 [MB] (16 MBps) [2024-11-17T04:31:31.511Z] Copying: 910/1024 [MB] (16 MBps) [2024-11-17T04:31:32.455Z] Copying: 925/1024 [MB] (15 MBps) [2024-11-17T04:31:33.844Z] Copying: 941/1024 [MB] (16 MBps) [2024-11-17T04:31:34.788Z] Copying: 958/1024 [MB] (16 MBps) [2024-11-17T04:31:35.733Z] Copying: 974/1024 [MB] (16 MBps) [2024-11-17T04:31:36.679Z] Copying: 991/1024 [MB] (16 MBps) [2024-11-17T04:31:37.623Z] Copying: 1007/1024 [MB] (16 MBps) [2024-11-17T04:31:37.623Z] Copying: 1022/1024 [MB] (15 MBps) [2024-11-17T04:31:39.011Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-17 04:31:38.581292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.284 [2024-11-17 04:31:38.581392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
00:25:53.284 [2024-11-17 04:31:38.581409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:53.284 [2024-11-17 04:31:38.581420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.284 [2024-11-17 04:31:38.581447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:53.284 [2024-11-17 04:31:38.582331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.284 [2024-11-17 04:31:38.582408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:53.284 [2024-11-17 04:31:38.582422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:25:53.284 [2024-11-17 04:31:38.582439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.284 [2024-11-17 04:31:38.582690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.284 [2024-11-17 04:31:38.582702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:53.284 [2024-11-17 04:31:38.582711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:25:53.284 [2024-11-17 04:31:38.582719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.284 [2024-11-17 04:31:38.597242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.284 [2024-11-17 04:31:38.597312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:53.285 [2024-11-17 04:31:38.597327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.500 ms 00:25:53.285 [2024-11-17 04:31:38.597346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.604214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.604460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:53.285 [2024-11-17 04:31:38.604483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.376 ms 00:25:53.285 [2024-11-17 04:31:38.604508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.607869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.608063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:53.285 [2024-11-17 04:31:38.608083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:25:53.285 [2024-11-17 04:31:38.608092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.612770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.612829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:53.285 [2024-11-17 04:31:38.612851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.556 ms 00:25:53.285 [2024-11-17 04:31:38.612860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.617106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.617176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:53.285 [2024-11-17 04:31:38.617194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.187 ms 00:25:53.285 [2024-11-17 04:31:38.617208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 
04:31:38.620716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.620787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:53.285 [2024-11-17 04:31:38.620797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:25:53.285 [2024-11-17 04:31:38.620805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.624015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.624070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:53.285 [2024-11-17 04:31:38.624081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:25:53.285 [2024-11-17 04:31:38.624089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.626360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.626427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:53.285 [2024-11-17 04:31:38.626437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.224 ms 00:25:53.285 [2024-11-17 04:31:38.626444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.628572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.628626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:53.285 [2024-11-17 04:31:38.628637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.050 ms 00:25:53.285 [2024-11-17 04:31:38.628645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.628690] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:53.285 [2024-11-17 04:31:38.628718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:53.285 [2024-11-17 04:31:38.628733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:53.285 [2024-11-17 04:31:38.628742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628892] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.628992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629085] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 
04:31:38.629278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
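(Annotation, not part of the captured output: each ftl_dev_dump_bands entry above follows the fixed pattern "Band <id>: <valid> / <total> wr_cnt: <n> state: <state>". A minimal, hypothetical Python sketch for tallying band states from a saved copy of this console log; the regex and helper name are mine and are not SPDK tooling.)

    import re
    from collections import Counter

    # Matches the ftl_dev_dump_bands entries seen above, e.g.
    #   "Band 13: 0 / 261120 wr_cnt: 0 state: free"
    BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

    def summarize_bands(log_text: str) -> Counter:
        """Count how many bands the dump reports in each state (free/open/closed)."""
        return Counter(m.group(5) for m in BAND_RE.finditer(log_text))

    # Example against a fragment of this dump:
    sample = "Band 12: 0 / 261120 wr_cnt: 0 state: free"
    print(summarize_bands(sample))  # Counter({'free': 1})
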
00:25:53.285 [2024-11-17 04:31:38.629501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:53.285 [2024-11-17 04:31:38.629621] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:53.285 [2024-11-17 04:31:38.629631] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4e176443-4201-40c4-8cf1-bf8a3448302f 00:25:53.285 [2024-11-17 04:31:38.629652] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:53.285 [2024-11-17 04:31:38.629660] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 159936 00:25:53.285 [2024-11-17 04:31:38.629668] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 157952 00:25:53.285 [2024-11-17 04:31:38.629686] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0126 00:25:53.285 [2024-11-17 04:31:38.629694] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:53.285 [2024-11-17 04:31:38.629703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:53.285 [2024-11-17 04:31:38.629711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:53.285 [2024-11-17 04:31:38.629719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:53.285 [2024-11-17 04:31:38.629726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:53.285 [2024-11-17 04:31:38.629733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.629747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:53.285 [2024-11-17 04:31:38.629755] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:25:53.285 [2024-11-17 04:31:38.629768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.632356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.632409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:53.285 [2024-11-17 04:31:38.632420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:25:53.285 [2024-11-17 04:31:38.632428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.632579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.285 [2024-11-17 04:31:38.632589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:53.285 [2024-11-17 04:31:38.632607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:25:53.285 [2024-11-17 04:31:38.632616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.640926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.285 [2024-11-17 04:31:38.640990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.285 [2024-11-17 04:31:38.641001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.285 [2024-11-17 04:31:38.641010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.641076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.285 [2024-11-17 04:31:38.641086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.285 [2024-11-17 04:31:38.641100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.285 [2024-11-17 04:31:38.641109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.641178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.285 [2024-11-17 04:31:38.641189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.285 [2024-11-17 04:31:38.641197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.285 [2024-11-17 04:31:38.641205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.641221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.285 [2024-11-17 04:31:38.641229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.285 [2024-11-17 04:31:38.641237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.285 [2024-11-17 04:31:38.641246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.655478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.285 [2024-11-17 04:31:38.655530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.285 [2024-11-17 04:31:38.655541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.285 [2024-11-17 04:31:38.655550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.665911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.285 [2024-11-17 04:31:38.666097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:25:53.285 [2024-11-17 04:31:38.666115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.285 [2024-11-17 04:31:38.666132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.285 [2024-11-17 04:31:38.666183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.285 [2024-11-17 04:31:38.666193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.285 [2024-11-17 04:31:38.666201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.286 [2024-11-17 04:31:38.666209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.286 [2024-11-17 04:31:38.666245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.286 [2024-11-17 04:31:38.666255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.286 [2024-11-17 04:31:38.666263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.286 [2024-11-17 04:31:38.666272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.286 [2024-11-17 04:31:38.666352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.286 [2024-11-17 04:31:38.666362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.286 [2024-11-17 04:31:38.666392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.286 [2024-11-17 04:31:38.666401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.286 [2024-11-17 04:31:38.666437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.286 [2024-11-17 04:31:38.666447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:53.286 [2024-11-17 04:31:38.666456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.286 [2024-11-17 04:31:38.666467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.286 [2024-11-17 04:31:38.666512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.286 [2024-11-17 04:31:38.666522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.286 [2024-11-17 04:31:38.666530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.286 [2024-11-17 04:31:38.666538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.286 [2024-11-17 04:31:38.666584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.286 [2024-11-17 04:31:38.666595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.286 [2024-11-17 04:31:38.666603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.286 [2024-11-17 04:31:38.666612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.286 [2024-11-17 04:31:38.666744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.415 ms, result 0 00:25:53.286 00:25:53.286 00:25:53.286 04:31:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:55.832 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:55.832 04:31:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:55.832 [2024-11-17 04:31:41.164308] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:25:55.832 [2024-11-17 04:31:41.164733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90903 ] 00:25:55.832 [2024-11-17 04:31:41.326649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.832 [2024-11-17 04:31:41.355699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.832 [2024-11-17 04:31:41.467501] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:55.832 [2024-11-17 04:31:41.467778] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.094 [2024-11-17 04:31:41.629854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.094 [2024-11-17 04:31:41.629920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:56.094 [2024-11-17 04:31:41.629935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:56.094 [2024-11-17 04:31:41.629944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.094 [2024-11-17 04:31:41.630003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.094 [2024-11-17 04:31:41.630014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.095 [2024-11-17 04:31:41.630023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:56.095 [2024-11-17 04:31:41.630031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.630059] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:56.095 [2024-11-17 04:31:41.630346] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:56.095 [2024-11-17 04:31:41.630364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.095 [2024-11-17 04:31:41.630396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.095 [2024-11-17 04:31:41.630407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:25:56.095 [2024-11-17 04:31:41.630418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.632162] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:56.095 [2024-11-17 04:31:41.636410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.095 [2024-11-17 04:31:41.636469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:56.095 [2024-11-17 04:31:41.636485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.249 ms 00:25:56.095 [2024-11-17 04:31:41.636516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.636599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.095 [2024-11-17 04:31:41.636610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:56.095 [2024-11-17 04:31:41.636619] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:56.095 [2024-11-17 04:31:41.636627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.645302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.095 [2024-11-17 04:31:41.645352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.095 [2024-11-17 04:31:41.645366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.629 ms 00:25:56.095 [2024-11-17 04:31:41.645399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.645509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.095 [2024-11-17 04:31:41.645543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.095 [2024-11-17 04:31:41.645555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:56.095 [2024-11-17 04:31:41.645564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.645623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.095 [2024-11-17 04:31:41.645634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:56.095 [2024-11-17 04:31:41.645642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:56.095 [2024-11-17 04:31:41.645654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.645678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:56.095 [2024-11-17 04:31:41.647768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.095 [2024-11-17 04:31:41.647805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.095 [2024-11-17 04:31:41.647824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:25:56.095 [2024-11-17 04:31:41.647835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.647871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.095 [2024-11-17 04:31:41.647880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:56.095 [2024-11-17 04:31:41.647892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:56.095 [2024-11-17 04:31:41.647902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.647926] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:56.095 [2024-11-17 04:31:41.647947] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:56.095 [2024-11-17 04:31:41.647986] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:56.095 [2024-11-17 04:31:41.648006] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:56.095 [2024-11-17 04:31:41.648112] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:56.095 [2024-11-17 04:31:41.648123] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:56.095 [2024-11-17 04:31:41.648139] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:56.095 [2024-11-17 04:31:41.648149] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:56.095 [2024-11-17 04:31:41.648159] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:56.095 [2024-11-17 04:31:41.648167] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:56.095 [2024-11-17 04:31:41.648176] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:56.095 [2024-11-17 04:31:41.648188] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:56.095 [2024-11-17 04:31:41.648196] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:56.095 [2024-11-17 04:31:41.648205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.095 [2024-11-17 04:31:41.648217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:56.095 [2024-11-17 04:31:41.648225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:25:56.095 [2024-11-17 04:31:41.648232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.648318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.095 [2024-11-17 04:31:41.648327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:56.095 [2024-11-17 04:31:41.648334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:56.095 [2024-11-17 04:31:41.648345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.095 [2024-11-17 04:31:41.648462] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:56.095 [2024-11-17 04:31:41.648475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:56.095 [2024-11-17 04:31:41.648490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.095 [2024-11-17 04:31:41.648512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.095 [2024-11-17 04:31:41.648521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:56.095 [2024-11-17 04:31:41.648536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:56.095 [2024-11-17 04:31:41.648545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:56.095 [2024-11-17 04:31:41.648554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:56.095 [2024-11-17 04:31:41.648563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:56.095 [2024-11-17 04:31:41.648574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.095 [2024-11-17 04:31:41.648582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:56.095 [2024-11-17 04:31:41.648591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:56.095 [2024-11-17 04:31:41.648598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.095 [2024-11-17 04:31:41.648607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:56.095 [2024-11-17 04:31:41.648615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:56.095 [2024-11-17 04:31:41.648623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.095 
[2024-11-17 04:31:41.648633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:56.095 [2024-11-17 04:31:41.648641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:56.095 [2024-11-17 04:31:41.648649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.095 [2024-11-17 04:31:41.648658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:56.095 [2024-11-17 04:31:41.648666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:56.095 [2024-11-17 04:31:41.648674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.095 [2024-11-17 04:31:41.648683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:56.095 [2024-11-17 04:31:41.648690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:56.095 [2024-11-17 04:31:41.648698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.095 [2024-11-17 04:31:41.648714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:56.095 [2024-11-17 04:31:41.648722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:56.095 [2024-11-17 04:31:41.648730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.095 [2024-11-17 04:31:41.648738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:56.095 [2024-11-17 04:31:41.648745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:56.095 [2024-11-17 04:31:41.648757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.095 [2024-11-17 04:31:41.648765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:56.095 [2024-11-17 04:31:41.648773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:56.095 [2024-11-17 04:31:41.648780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.095 [2024-11-17 04:31:41.648788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:56.095 [2024-11-17 04:31:41.648796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:56.095 [2024-11-17 04:31:41.648803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.095 [2024-11-17 04:31:41.648811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:56.095 [2024-11-17 04:31:41.648817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:56.095 [2024-11-17 04:31:41.648823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.095 [2024-11-17 04:31:41.648830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:56.095 [2024-11-17 04:31:41.648840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:56.095 [2024-11-17 04:31:41.648847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.096 [2024-11-17 04:31:41.648854] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:56.096 [2024-11-17 04:31:41.648864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:56.096 [2024-11-17 04:31:41.648871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.096 [2024-11-17 04:31:41.648879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.096 [2024-11-17 04:31:41.648887] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:56.096 [2024-11-17 04:31:41.648895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:56.096 [2024-11-17 04:31:41.648902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:56.096 [2024-11-17 04:31:41.648909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:56.096 [2024-11-17 04:31:41.648915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:56.096 [2024-11-17 04:31:41.648923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:56.096 [2024-11-17 04:31:41.648932] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:56.096 [2024-11-17 04:31:41.648941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.096 [2024-11-17 04:31:41.648950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:56.096 [2024-11-17 04:31:41.648957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:56.096 [2024-11-17 04:31:41.648966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:56.096 [2024-11-17 04:31:41.648974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:56.096 [2024-11-17 04:31:41.648981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:56.096 [2024-11-17 04:31:41.648988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:56.096 [2024-11-17 04:31:41.648995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:56.096 [2024-11-17 04:31:41.649002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:56.096 [2024-11-17 04:31:41.649009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:56.096 [2024-11-17 04:31:41.649017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:56.096 [2024-11-17 04:31:41.649024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:56.096 [2024-11-17 04:31:41.649031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:56.096 [2024-11-17 04:31:41.649038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:56.096 [2024-11-17 04:31:41.649045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:56.096 [2024-11-17 04:31:41.649053] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:56.096 [2024-11-17 
04:31:41.649061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.096 [2024-11-17 04:31:41.649070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:56.096 [2024-11-17 04:31:41.649078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:56.096 [2024-11-17 04:31:41.649087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:56.096 [2024-11-17 04:31:41.649095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:56.096 [2024-11-17 04:31:41.649102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.649110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:56.096 [2024-11-17 04:31:41.649118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:25:56.096 [2024-11-17 04:31:41.649127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.663047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.663095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.096 [2024-11-17 04:31:41.663106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.874 ms 00:25:56.096 [2024-11-17 04:31:41.663114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.663197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.663207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:56.096 [2024-11-17 04:31:41.663216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:56.096 [2024-11-17 04:31:41.663223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.686194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.686275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.096 [2024-11-17 04:31:41.686298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.912 ms 00:25:56.096 [2024-11-17 04:31:41.686314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.686418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.686439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.096 [2024-11-17 04:31:41.686462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:56.096 [2024-11-17 04:31:41.686484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.687132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.687195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.096 [2024-11-17 04:31:41.687216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:25:56.096 [2024-11-17 04:31:41.687232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.687502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.687522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.096 [2024-11-17 04:31:41.687544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:25:56.096 [2024-11-17 04:31:41.687565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.696305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.696352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.096 [2024-11-17 04:31:41.696374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.705 ms 00:25:56.096 [2024-11-17 04:31:41.696400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.700143] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:56.096 [2024-11-17 04:31:41.700194] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:56.096 [2024-11-17 04:31:41.700210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.700218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:56.096 [2024-11-17 04:31:41.700227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:25:56.096 [2024-11-17 04:31:41.700234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.716033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.716087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:56.096 [2024-11-17 04:31:41.716111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.745 ms 00:25:56.096 [2024-11-17 04:31:41.716124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.719318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.719369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:56.096 [2024-11-17 04:31:41.719399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.134 ms 00:25:56.096 [2024-11-17 04:31:41.719406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.722543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.722595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:56.096 [2024-11-17 04:31:41.722616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:25:56.096 [2024-11-17 04:31:41.722624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.723003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.723017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:56.096 [2024-11-17 04:31:41.723026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:25:56.096 [2024-11-17 04:31:41.723038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.747179] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.747242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:56.096 [2024-11-17 04:31:41.747257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.120 ms 00:25:56.096 [2024-11-17 04:31:41.747266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.755366] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:56.096 [2024-11-17 04:31:41.758639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.758689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:56.096 [2024-11-17 04:31:41.758708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.316 ms 00:25:56.096 [2024-11-17 04:31:41.758717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.758797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.096 [2024-11-17 04:31:41.758812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:56.096 [2024-11-17 04:31:41.758822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:56.096 [2024-11-17 04:31:41.758829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.096 [2024-11-17 04:31:41.759632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.097 [2024-11-17 04:31:41.759670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:56.097 [2024-11-17 04:31:41.759681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:25:56.097 [2024-11-17 04:31:41.759689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.097 [2024-11-17 04:31:41.759716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.097 [2024-11-17 04:31:41.759725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:56.097 [2024-11-17 04:31:41.759733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:56.097 [2024-11-17 04:31:41.759741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.097 [2024-11-17 04:31:41.759779] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:56.097 [2024-11-17 04:31:41.759789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.097 [2024-11-17 04:31:41.759800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:56.097 [2024-11-17 04:31:41.759812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:56.097 [2024-11-17 04:31:41.759819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.097 [2024-11-17 04:31:41.765271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.097 [2024-11-17 04:31:41.765319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:56.097 [2024-11-17 04:31:41.765331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.433 ms 00:25:56.097 [2024-11-17 04:31:41.765338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.097 [2024-11-17 04:31:41.765443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.097 [2024-11-17 04:31:41.765454] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:56.097 [2024-11-17 04:31:41.765465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:56.097 [2024-11-17 04:31:41.765475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.097 [2024-11-17 04:31:41.767045] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.656 ms, result 0 00:25:57.483  [2024-11-17T04:31:44.156Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-17T04:31:45.099Z] Copying: 33/1024 [MB] (13 MBps) [2024-11-17T04:31:46.042Z] Copying: 55/1024 [MB] (22 MBps) [2024-11-17T04:31:46.988Z] Copying: 73/1024 [MB] (17 MBps) [2024-11-17T04:31:48.373Z] Copying: 92/1024 [MB] (19 MBps) [2024-11-17T04:31:48.947Z] Copying: 104/1024 [MB] (11 MBps) [2024-11-17T04:31:50.333Z] Copying: 114/1024 [MB] (10 MBps) [2024-11-17T04:31:51.275Z] Copying: 125/1024 [MB] (10 MBps) [2024-11-17T04:31:52.220Z] Copying: 135/1024 [MB] (10 MBps) [2024-11-17T04:31:53.165Z] Copying: 150/1024 [MB] (14 MBps) [2024-11-17T04:31:54.111Z] Copying: 161/1024 [MB] (10 MBps) [2024-11-17T04:31:55.064Z] Copying: 176/1024 [MB] (15 MBps) [2024-11-17T04:31:56.007Z] Copying: 187/1024 [MB] (10 MBps) [2024-11-17T04:31:56.951Z] Copying: 198/1024 [MB] (10 MBps) [2024-11-17T04:31:58.340Z] Copying: 208/1024 [MB] (10 MBps) [2024-11-17T04:31:59.285Z] Copying: 225/1024 [MB] (17 MBps) [2024-11-17T04:32:00.229Z] Copying: 236/1024 [MB] (10 MBps) [2024-11-17T04:32:01.176Z] Copying: 246/1024 [MB] (10 MBps) [2024-11-17T04:32:02.121Z] Copying: 257/1024 [MB] (11 MBps) [2024-11-17T04:32:03.064Z] Copying: 268/1024 [MB] (10 MBps) [2024-11-17T04:32:04.010Z] Copying: 288/1024 [MB] (19 MBps) [2024-11-17T04:32:05.102Z] Copying: 321/1024 [MB] (33 MBps) [2024-11-17T04:32:06.046Z] Copying: 331/1024 [MB] (10 MBps) [2024-11-17T04:32:06.990Z] Copying: 342/1024 [MB] (10 MBps) [2024-11-17T04:32:08.375Z] Copying: 353/1024 [MB] (11 MBps) [2024-11-17T04:32:08.947Z] Copying: 370/1024 [MB] (16 MBps) [2024-11-17T04:32:10.330Z] Copying: 392/1024 [MB] (21 MBps) [2024-11-17T04:32:11.275Z] Copying: 408/1024 [MB] (16 MBps) [2024-11-17T04:32:12.222Z] Copying: 423/1024 [MB] (14 MBps) [2024-11-17T04:32:13.164Z] Copying: 438/1024 [MB] (15 MBps) [2024-11-17T04:32:14.109Z] Copying: 448/1024 [MB] (10 MBps) [2024-11-17T04:32:15.052Z] Copying: 461/1024 [MB] (12 MBps) [2024-11-17T04:32:15.997Z] Copying: 479/1024 [MB] (17 MBps) [2024-11-17T04:32:17.384Z] Copying: 489/1024 [MB] (10 MBps) [2024-11-17T04:32:17.958Z] Copying: 507/1024 [MB] (17 MBps) [2024-11-17T04:32:19.348Z] Copying: 526/1024 [MB] (18 MBps) [2024-11-17T04:32:20.294Z] Copying: 538/1024 [MB] (12 MBps) [2024-11-17T04:32:21.240Z] Copying: 553/1024 [MB] (14 MBps) [2024-11-17T04:32:22.184Z] Copying: 570/1024 [MB] (17 MBps) [2024-11-17T04:32:23.127Z] Copying: 585/1024 [MB] (14 MBps) [2024-11-17T04:32:24.073Z] Copying: 603/1024 [MB] (18 MBps) [2024-11-17T04:32:25.017Z] Copying: 619/1024 [MB] (15 MBps) [2024-11-17T04:32:25.962Z] Copying: 629/1024 [MB] (10 MBps) [2024-11-17T04:32:27.350Z] Copying: 640/1024 [MB] (10 MBps) [2024-11-17T04:32:28.304Z] Copying: 654/1024 [MB] (13 MBps) [2024-11-17T04:32:29.248Z] Copying: 664/1024 [MB] (10 MBps) [2024-11-17T04:32:30.189Z] Copying: 682/1024 [MB] (18 MBps) [2024-11-17T04:32:31.161Z] Copying: 693/1024 [MB] (11 MBps) [2024-11-17T04:32:32.108Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-17T04:32:33.049Z] Copying: 716/1024 [MB] (12 MBps) [2024-11-17T04:32:33.988Z] Copying: 727/1024 [MB] (10 MBps) 
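(Annotation, not part of the captured output: the "Copying: N/1024 [MB] (M MBps)" progress entries emitted during the spdk_dd read-back, continued below, can be post-processed the same way. A small sketch under the assumption that the format matches the lines shown here; this is not an SPDK utility.)

    import re

    # Progress entries look like: "Copying: 104/1024 [MB] (11 MBps)"
    PROGRESS_RE = re.compile(r"Copying: (\d+)/(\d+) \[MB\] \((\d+) MBps\)")

    def copy_rates(log_text: str) -> list[int]:
        """Return the per-interval MBps values reported while copying."""
        return [int(m.group(3)) for m in PROGRESS_RE.finditer(log_text)]

    rates = copy_rates("Copying: 19/1024 [MB] (19 MBps) Copying: 33/1024 [MB] (13 MBps)")
    if rates:
        print(f"{len(rates)} samples, min {min(rates)} / max {max(rates)} MBps")
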
[2024-11-17T04:32:35.374Z] Copying: 749/1024 [MB] (22 MBps) [2024-11-17T04:32:35.945Z] Copying: 763/1024 [MB] (14 MBps) [2024-11-17T04:32:37.327Z] Copying: 783/1024 [MB] (19 MBps) [2024-11-17T04:32:38.267Z] Copying: 804/1024 [MB] (20 MBps) [2024-11-17T04:32:39.207Z] Copying: 819/1024 [MB] (15 MBps) [2024-11-17T04:32:40.149Z] Copying: 843/1024 [MB] (24 MBps) [2024-11-17T04:32:41.094Z] Copying: 864/1024 [MB] (20 MBps) [2024-11-17T04:32:42.035Z] Copying: 885/1024 [MB] (21 MBps) [2024-11-17T04:32:42.978Z] Copying: 908/1024 [MB] (22 MBps) [2024-11-17T04:32:44.365Z] Copying: 931/1024 [MB] (23 MBps) [2024-11-17T04:32:45.308Z] Copying: 950/1024 [MB] (19 MBps) [2024-11-17T04:32:46.254Z] Copying: 963/1024 [MB] (12 MBps) [2024-11-17T04:32:47.200Z] Copying: 978/1024 [MB] (15 MBps) [2024-11-17T04:32:48.145Z] Copying: 989/1024 [MB] (10 MBps) [2024-11-17T04:32:49.090Z] Copying: 999/1024 [MB] (10 MBps) [2024-11-17T04:32:49.090Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-17 04:32:48.792080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.792186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:03.363 [2024-11-17 04:32:48.792210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:03.363 [2024-11-17 04:32:48.792233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.792272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:03.363 [2024-11-17 04:32:48.793057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.793107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:03.363 [2024-11-17 04:32:48.793121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:27:03.363 [2024-11-17 04:32:48.793132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.793469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.793486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:03.363 [2024-11-17 04:32:48.793500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:27:03.363 [2024-11-17 04:32:48.793517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.799280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.799307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:03.363 [2024-11-17 04:32:48.799317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.740 ms 00:27:03.363 [2024-11-17 04:32:48.799325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.805533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.805564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:03.363 [2024-11-17 04:32:48.805584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:27:03.363 [2024-11-17 04:32:48.805591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.808094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.808134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist NV cache metadata 00:27:03.363 [2024-11-17 04:32:48.808143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.449 ms 00:27:03.363 [2024-11-17 04:32:48.808151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.811978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.812019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:03.363 [2024-11-17 04:32:48.812029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.791 ms 00:27:03.363 [2024-11-17 04:32:48.812037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.816547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.816586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:03.363 [2024-11-17 04:32:48.816596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.473 ms 00:27:03.363 [2024-11-17 04:32:48.816615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.819316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.819476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:03.363 [2024-11-17 04:32:48.819493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:27:03.363 [2024-11-17 04:32:48.819501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.822694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.822798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:03.363 [2024-11-17 04:32:48.822831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.148 ms 00:27:03.363 [2024-11-17 04:32:48.822855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.825600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.825684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:03.363 [2024-11-17 04:32:48.825710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.656 ms 00:27:03.363 [2024-11-17 04:32:48.825730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.828469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.363 [2024-11-17 04:32:48.828567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:03.363 [2024-11-17 04:32:48.828597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:27:03.363 [2024-11-17 04:32:48.828622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.363 [2024-11-17 04:32:48.828699] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:03.363 [2024-11-17 04:32:48.828742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:03.363 [2024-11-17 04:32:48.828773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:03.363 [2024-11-17 04:32:48.828797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 
04:32:48.828821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.828845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.828868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.828892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.828915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.828938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.828962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.828985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
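(Annotation, not part of the captured output: this final dump can be cross-checked against the statistics printed during the earlier shutdown. Band 1 (closed, 261120 valid blocks) plus Band 2 (open, 1536 valid blocks) sums to 262656, which matches the "total valid LBAs: 262656" figure logged earlier, and the reported WAF of 1.0126 is simply total writes over user writes. A quick sanity check using only figures taken from the log above:)

    # Figures taken from the ftl_dev_dump_stats / ftl_dev_dump_bands output above.
    total_writes = 159936
    user_writes = 157952
    band_valid = {1: 261120, 2: 1536}   # closed band 1 + open band 2; remaining bands are free

    waf = total_writes / user_writes
    print(f"WAF = {waf:.4f}")                          # 1.0126, as logged
    print(f"valid LBAs = {sum(band_valid.values())}")  # 262656, as logged
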
00:27:03.363 [2024-11-17 04:32:48.829482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:03.363 [2024-11-17 04:32:48.829756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.829780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.829803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.829826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.829852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.829875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.829898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.829920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.829943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.829966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.829989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.830983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.831006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.831029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.831052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.831076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.831099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.831122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.831145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.831168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.831191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:03.364 [2024-11-17 04:32:48.831238] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:03.364 [2024-11-17 04:32:48.831262] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4e176443-4201-40c4-8cf1-bf8a3448302f 00:27:03.364 [2024-11-17 04:32:48.831286] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 262656 00:27:03.364 [2024-11-17 04:32:48.831307] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:03.364 [2024-11-17 04:32:48.831329] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:03.364 [2024-11-17 04:32:48.831351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:03.364 [2024-11-17 04:32:48.831372] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:03.364 [2024-11-17 04:32:48.831418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:03.364 [2024-11-17 04:32:48.831460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:03.364 [2024-11-17 04:32:48.831480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:03.364 [2024-11-17 04:32:48.831500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:03.364 [2024-11-17 04:32:48.831537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.364 [2024-11-17 04:32:48.831567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:03.364 [2024-11-17 04:32:48.831591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.840 ms 00:27:03.364 [2024-11-17 04:32:48.831623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.364 [2024-11-17 04:32:48.833862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.364 [2024-11-17 04:32:48.833887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:03.364 [2024-11-17 04:32:48.833898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:27:03.364 [2024-11-17 04:32:48.833906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.364 [2024-11-17 04:32:48.834021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.364 [2024-11-17 04:32:48.834030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:03.364 [2024-11-17 04:32:48.834043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:27:03.364 [2024-11-17 04:32:48.834051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.364 [2024-11-17 04:32:48.840152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.364 [2024-11-17 04:32:48.840195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:03.364 [2024-11-17 04:32:48.840205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.364 [2024-11-17 04:32:48.840216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.364 [2024-11-17 04:32:48.840272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.364 [2024-11-17 04:32:48.840280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:03.364 [2024-11-17 04:32:48.840288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.364 [2024-11-17 04:32:48.840295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.364 [2024-11-17 04:32:48.840356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.364 [2024-11-17 04:32:48.840366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:03.364 [2024-11-17 04:32:48.840397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.364 [2024-11-17 04:32:48.840405] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.364 [2024-11-17 04:32:48.840424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.365 [2024-11-17 04:32:48.840432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:03.365 [2024-11-17 04:32:48.840440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.365 [2024-11-17 04:32:48.840447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.365 [2024-11-17 04:32:48.852231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.365 [2024-11-17 04:32:48.852451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:03.365 [2024-11-17 04:32:48.852472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.365 [2024-11-17 04:32:48.852488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.365 [2024-11-17 04:32:48.862423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.365 [2024-11-17 04:32:48.862469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:03.365 [2024-11-17 04:32:48.862481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.365 [2024-11-17 04:32:48.862490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.365 [2024-11-17 04:32:48.862577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.365 [2024-11-17 04:32:48.862587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:03.365 [2024-11-17 04:32:48.862597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.365 [2024-11-17 04:32:48.862605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.365 [2024-11-17 04:32:48.862644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.365 [2024-11-17 04:32:48.862653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:03.365 [2024-11-17 04:32:48.862661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.365 [2024-11-17 04:32:48.862669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.365 [2024-11-17 04:32:48.862748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.365 [2024-11-17 04:32:48.862759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:03.365 [2024-11-17 04:32:48.862768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.365 [2024-11-17 04:32:48.862776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.365 [2024-11-17 04:32:48.862809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.365 [2024-11-17 04:32:48.862821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:03.365 [2024-11-17 04:32:48.862833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.365 [2024-11-17 04:32:48.862842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.365 [2024-11-17 04:32:48.862881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.365 [2024-11-17 04:32:48.862890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:03.365 [2024-11-17 04:32:48.862899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:03.365 [2024-11-17 04:32:48.862907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.365 [2024-11-17 04:32:48.862957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.365 [2024-11-17 04:32:48.862971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:03.365 [2024-11-17 04:32:48.862980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.365 [2024-11-17 04:32:48.862988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.365 [2024-11-17 04:32:48.863114] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.018 ms, result 0 00:27:03.365 00:27:03.365 00:27:03.637 04:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:06.180 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:06.180 Process with pid 88929 is not found 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 88929 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 88929 ']' 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 88929 00:27:06.180 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88929) - No such process 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 88929 is not found' 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:06.180 Remove shared memory files 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:06.180 ************************************ 00:27:06.180 END TEST ftl_dirty_shutdown 00:27:06.180 ************************************ 00:27:06.180 00:27:06.180 real 4m16.958s 00:27:06.180 user 4m50.829s 00:27:06.180 sys 0m29.696s 00:27:06.180 04:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:06.180 04:32:51 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:06.441 04:32:51 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:06.441 04:32:51 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:06.441 04:32:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:06.441 04:32:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:06.441 ************************************ 00:27:06.441 START TEST ftl_upgrade_shutdown 00:27:06.441 ************************************ 00:27:06.441 04:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:06.441 * Looking for test storage... 00:27:06.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:06.441 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:06.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.442 --rc genhtml_branch_coverage=1 00:27:06.442 --rc genhtml_function_coverage=1 00:27:06.442 --rc genhtml_legend=1 00:27:06.442 --rc geninfo_all_blocks=1 00:27:06.442 --rc geninfo_unexecuted_blocks=1 00:27:06.442 00:27:06.442 ' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:06.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.442 --rc genhtml_branch_coverage=1 00:27:06.442 --rc genhtml_function_coverage=1 00:27:06.442 --rc genhtml_legend=1 00:27:06.442 --rc geninfo_all_blocks=1 00:27:06.442 --rc geninfo_unexecuted_blocks=1 00:27:06.442 00:27:06.442 ' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:06.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.442 --rc genhtml_branch_coverage=1 00:27:06.442 --rc genhtml_function_coverage=1 00:27:06.442 --rc genhtml_legend=1 00:27:06.442 --rc geninfo_all_blocks=1 00:27:06.442 --rc geninfo_unexecuted_blocks=1 00:27:06.442 00:27:06.442 ' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:06.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.442 --rc genhtml_branch_coverage=1 00:27:06.442 --rc genhtml_function_coverage=1 00:27:06.442 --rc genhtml_legend=1 00:27:06.442 --rc geninfo_all_blocks=1 00:27:06.442 --rc geninfo_unexecuted_blocks=1 00:27:06.442 00:27:06.442 ' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:06.442 04:32:52 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91689 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91689 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91689 ']' 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:06.442 04:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:06.704 [2024-11-17 04:32:52.249557] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
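In condensed form, the target setup traced above amounts to the following shell sketch. It is only a recap of this run, with parameter values copied from the trace; the actual logic lives in test/ftl/upgrade_shutdown.sh, test/ftl/common.sh and test/common/autotest_common.sh:

  # FTL test parameters exported for the upgrade/shutdown test (values from this run)
  export FTL_BDEV=ftl                  # name of the FTL bdev to create
  export FTL_BASE=0000:00:11.0         # PCIe address of the base (data) device
  export FTL_BASE_SIZE=20480           # base bdev size in MiB
  export FTL_CACHE=0000:00:10.0        # PCIe address of the NV cache device
  export FTL_CACHE_SIZE=5120           # NV cache size in MiB
  export FTL_L2P_DRAM_LIMIT=2          # DRAM limit for the L2P table
  # tcp_target_setup: start the SPDK target pinned to core 0, then wait on its RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"        # helper provided by autotest_common.sh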
00:27:06.704 [2024-11-17 04:32:52.250085] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91689 ] 00:27:06.704 [2024-11-17 04:32:52.416025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.965 [2024-11-17 04:32:52.448983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:07.538 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:07.800 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:07.800 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:07.800 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:07.800 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:07.800 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:07.800 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:07.800 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:07.800 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:08.061 { 00:27:08.061 "name": "basen1", 00:27:08.061 "aliases": [ 00:27:08.061 "89c5647f-5c8f-4f0b-8e93-399139df6cfb" 00:27:08.061 ], 00:27:08.061 "product_name": "NVMe disk", 00:27:08.061 "block_size": 4096, 00:27:08.061 "num_blocks": 1310720, 00:27:08.061 "uuid": "89c5647f-5c8f-4f0b-8e93-399139df6cfb", 00:27:08.061 "numa_id": -1, 00:27:08.061 "assigned_rate_limits": { 00:27:08.061 "rw_ios_per_sec": 0, 00:27:08.061 "rw_mbytes_per_sec": 0, 00:27:08.061 "r_mbytes_per_sec": 0, 00:27:08.061 "w_mbytes_per_sec": 0 00:27:08.061 }, 00:27:08.061 "claimed": true, 00:27:08.061 "claim_type": "read_many_write_one", 00:27:08.061 "zoned": false, 00:27:08.061 "supported_io_types": { 00:27:08.061 "read": true, 00:27:08.061 "write": true, 00:27:08.061 "unmap": true, 00:27:08.061 "flush": true, 00:27:08.061 "reset": true, 00:27:08.061 "nvme_admin": true, 00:27:08.061 "nvme_io": true, 00:27:08.061 "nvme_io_md": false, 00:27:08.061 "write_zeroes": true, 00:27:08.061 "zcopy": false, 00:27:08.061 "get_zone_info": false, 00:27:08.061 "zone_management": false, 00:27:08.061 "zone_append": false, 00:27:08.061 "compare": true, 00:27:08.061 "compare_and_write": false, 00:27:08.061 "abort": true, 00:27:08.061 "seek_hole": false, 00:27:08.061 "seek_data": false, 00:27:08.061 "copy": true, 00:27:08.061 "nvme_iov_md": false 00:27:08.061 }, 00:27:08.061 "driver_specific": { 00:27:08.061 "nvme": [ 00:27:08.061 { 00:27:08.061 "pci_address": "0000:00:11.0", 00:27:08.061 "trid": { 00:27:08.061 "trtype": "PCIe", 00:27:08.061 "traddr": "0000:00:11.0" 00:27:08.061 }, 00:27:08.061 "ctrlr_data": { 00:27:08.061 "cntlid": 0, 00:27:08.061 "vendor_id": "0x1b36", 00:27:08.061 "model_number": "QEMU NVMe Ctrl", 00:27:08.061 "serial_number": "12341", 00:27:08.061 "firmware_revision": "8.0.0", 00:27:08.061 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:08.061 "oacs": { 00:27:08.061 "security": 0, 00:27:08.061 "format": 1, 00:27:08.061 "firmware": 0, 00:27:08.061 "ns_manage": 1 00:27:08.061 }, 00:27:08.061 "multi_ctrlr": false, 00:27:08.061 "ana_reporting": false 00:27:08.061 }, 00:27:08.061 "vs": { 00:27:08.061 "nvme_version": "1.4" 00:27:08.061 }, 00:27:08.061 "ns_data": { 00:27:08.061 "id": 1, 00:27:08.061 "can_share": false 00:27:08.061 } 00:27:08.061 } 00:27:08.061 ], 00:27:08.061 "mp_policy": "active_passive" 00:27:08.061 } 00:27:08.061 } 00:27:08.061 ]' 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:08.061 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:08.322 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=9c5670b4-e276-4e77-b33e-5a1ab3e125d7 00:27:08.322 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:08.322 04:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9c5670b4-e276-4e77-b33e-5a1ab3e125d7 00:27:08.582 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:08.843 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=90f3338a-e671-44d4-8df1-96aabbf38788 00:27:08.843 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 90f3338a-e671-44d4-8df1-96aabbf38788 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f98b9cfd-e1d0-427e-add2-b846d18902bb 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f98b9cfd-e1d0-427e-add2-b846d18902bb ]] 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f98b9cfd-e1d0-427e-add2-b846d18902bb 5120 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f98b9cfd-e1d0-427e-add2-b846d18902bb 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f98b9cfd-e1d0-427e-add2-b846d18902bb 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f98b9cfd-e1d0-427e-add2-b846d18902bb 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:09.103 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f98b9cfd-e1d0-427e-add2-b846d18902bb 00:27:09.362 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:09.362 { 00:27:09.362 "name": "f98b9cfd-e1d0-427e-add2-b846d18902bb", 00:27:09.362 "aliases": [ 00:27:09.362 "lvs/basen1p0" 00:27:09.362 ], 00:27:09.362 "product_name": "Logical Volume", 00:27:09.362 "block_size": 4096, 00:27:09.362 "num_blocks": 5242880, 00:27:09.362 "uuid": "f98b9cfd-e1d0-427e-add2-b846d18902bb", 00:27:09.362 "assigned_rate_limits": { 00:27:09.362 "rw_ios_per_sec": 0, 00:27:09.362 "rw_mbytes_per_sec": 0, 00:27:09.362 "r_mbytes_per_sec": 0, 00:27:09.362 "w_mbytes_per_sec": 0 00:27:09.362 }, 00:27:09.362 "claimed": false, 00:27:09.362 "zoned": false, 00:27:09.362 "supported_io_types": { 00:27:09.362 "read": true, 00:27:09.362 "write": true, 00:27:09.362 "unmap": true, 00:27:09.362 "flush": false, 00:27:09.362 "reset": true, 00:27:09.362 "nvme_admin": false, 00:27:09.362 "nvme_io": false, 00:27:09.362 "nvme_io_md": false, 00:27:09.362 "write_zeroes": 
true, 00:27:09.362 "zcopy": false, 00:27:09.362 "get_zone_info": false, 00:27:09.362 "zone_management": false, 00:27:09.362 "zone_append": false, 00:27:09.362 "compare": false, 00:27:09.362 "compare_and_write": false, 00:27:09.362 "abort": false, 00:27:09.362 "seek_hole": true, 00:27:09.362 "seek_data": true, 00:27:09.362 "copy": false, 00:27:09.362 "nvme_iov_md": false 00:27:09.362 }, 00:27:09.362 "driver_specific": { 00:27:09.362 "lvol": { 00:27:09.362 "lvol_store_uuid": "90f3338a-e671-44d4-8df1-96aabbf38788", 00:27:09.362 "base_bdev": "basen1", 00:27:09.362 "thin_provision": true, 00:27:09.362 "num_allocated_clusters": 0, 00:27:09.362 "snapshot": false, 00:27:09.362 "clone": false, 00:27:09.362 "esnap_clone": false 00:27:09.362 } 00:27:09.362 } 00:27:09.362 } 00:27:09.362 ]' 00:27:09.362 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:09.362 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:09.362 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:09.362 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:09.362 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:09.362 04:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:09.362 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:09.362 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:09.362 04:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:09.621 04:32:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:09.621 04:32:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:09.621 04:32:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:09.621 04:32:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:09.621 04:32:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:09.621 04:32:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f98b9cfd-e1d0-427e-add2-b846d18902bb -c cachen1p0 --l2p_dram_limit 2 00:27:09.880 [2024-11-17 04:32:55.506108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.506148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:09.880 [2024-11-17 04:32:55.506159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:09.880 [2024-11-17 04:32:55.506169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.506207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.506217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:09.880 [2024-11-17 04:32:55.506225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:09.880 [2024-11-17 04:32:55.506236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.506251] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:09.880 [2024-11-17 
04:32:55.506484] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:09.880 [2024-11-17 04:32:55.506498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.506505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:09.880 [2024-11-17 04:32:55.506512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:27:09.880 [2024-11-17 04:32:55.506521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.506568] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d01a48cf-805b-4cfb-a942-9c3b9227f856 00:27:09.880 [2024-11-17 04:32:55.507508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.507533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:09.880 [2024-11-17 04:32:55.507545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:09.880 [2024-11-17 04:32:55.507551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.512327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.512355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:09.880 [2024-11-17 04:32:55.512364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.741 ms 00:27:09.880 [2024-11-17 04:32:55.512370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.512411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.512422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:09.880 [2024-11-17 04:32:55.512430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:09.880 [2024-11-17 04:32:55.512438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.512477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.512485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:09.880 [2024-11-17 04:32:55.512492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:09.880 [2024-11-17 04:32:55.512498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.512525] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:09.880 [2024-11-17 04:32:55.513796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.513822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:09.880 [2024-11-17 04:32:55.513830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.286 ms 00:27:09.880 [2024-11-17 04:32:55.513839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.513859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.513867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:09.880 [2024-11-17 04:32:55.513873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:09.880 [2024-11-17 04:32:55.513881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.513894] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:09.880 [2024-11-17 04:32:55.513998] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:09.880 [2024-11-17 04:32:55.514008] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:09.880 [2024-11-17 04:32:55.514017] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:09.880 [2024-11-17 04:32:55.514025] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:09.880 [2024-11-17 04:32:55.514036] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:09.880 [2024-11-17 04:32:55.514042] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:09.880 [2024-11-17 04:32:55.514057] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:09.880 [2024-11-17 04:32:55.514062] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:09.880 [2024-11-17 04:32:55.514069] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:09.880 [2024-11-17 04:32:55.514075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.514081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:09.880 [2024-11-17 04:32:55.514088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:27:09.880 [2024-11-17 04:32:55.514095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.514157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.880 [2024-11-17 04:32:55.514166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:09.880 [2024-11-17 04:32:55.514172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:09.880 [2024-11-17 04:32:55.514178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.880 [2024-11-17 04:32:55.514257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:09.880 [2024-11-17 04:32:55.514267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:09.880 [2024-11-17 04:32:55.514273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:09.880 [2024-11-17 04:32:55.514280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.880 [2024-11-17 04:32:55.514286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:09.880 [2024-11-17 04:32:55.514293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:09.880 [2024-11-17 04:32:55.514298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:09.880 [2024-11-17 04:32:55.514304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:09.880 [2024-11-17 04:32:55.514309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:09.880 [2024-11-17 04:32:55.514316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.880 [2024-11-17 04:32:55.514321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:09.880 [2024-11-17 04:32:55.514328] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:09.880 [2024-11-17 04:32:55.514332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.880 [2024-11-17 04:32:55.514340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:09.880 [2024-11-17 04:32:55.514346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:09.880 [2024-11-17 04:32:55.514352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.880 [2024-11-17 04:32:55.514357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:09.880 [2024-11-17 04:32:55.514364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:09.880 [2024-11-17 04:32:55.514368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.880 [2024-11-17 04:32:55.514390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:09.880 [2024-11-17 04:32:55.514396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:09.880 [2024-11-17 04:32:55.514402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:09.880 [2024-11-17 04:32:55.514407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:09.880 [2024-11-17 04:32:55.514414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:09.881 [2024-11-17 04:32:55.514419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:09.881 [2024-11-17 04:32:55.514426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:09.881 [2024-11-17 04:32:55.514431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:09.881 [2024-11-17 04:32:55.514438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:09.881 [2024-11-17 04:32:55.514443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:09.881 [2024-11-17 04:32:55.514452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:09.881 [2024-11-17 04:32:55.514457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:09.881 [2024-11-17 04:32:55.514464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:09.881 [2024-11-17 04:32:55.514470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:09.881 [2024-11-17 04:32:55.514477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.881 [2024-11-17 04:32:55.514482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:09.881 [2024-11-17 04:32:55.514490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:09.881 [2024-11-17 04:32:55.514496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.881 [2024-11-17 04:32:55.514503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:09.881 [2024-11-17 04:32:55.514508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:09.881 [2024-11-17 04:32:55.514515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.881 [2024-11-17 04:32:55.514521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:09.881 [2024-11-17 04:32:55.514528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:09.881 [2024-11-17 04:32:55.514534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.881 [2024-11-17 04:32:55.514541] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:09.881 [2024-11-17 04:32:55.514548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:09.881 [2024-11-17 04:32:55.514557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:09.881 [2024-11-17 04:32:55.514563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.881 [2024-11-17 04:32:55.514572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:09.881 [2024-11-17 04:32:55.514578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:09.881 [2024-11-17 04:32:55.514585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:09.881 [2024-11-17 04:32:55.514590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:09.881 [2024-11-17 04:32:55.514597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:09.881 [2024-11-17 04:32:55.514603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:09.881 [2024-11-17 04:32:55.514613] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:09.881 [2024-11-17 04:32:55.514622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:09.881 [2024-11-17 04:32:55.514639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:09.881 [2024-11-17 04:32:55.514662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:09.881 [2024-11-17 04:32:55.514668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:09.881 [2024-11-17 04:32:55.514677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:09.881 [2024-11-17 04:32:55.514683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:09.881 [2024-11-17 04:32:55.514730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:09.881 [2024-11-17 04:32:55.514737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:09.881 [2024-11-17 04:32:55.514751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:09.881 [2024-11-17 04:32:55.514759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:09.881 [2024-11-17 04:32:55.514765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:09.881 [2024-11-17 04:32:55.514773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.881 [2024-11-17 04:32:55.514779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:09.881 [2024-11-17 04:32:55.514788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:27:09.881 [2024-11-17 04:32:55.514796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.881 [2024-11-17 04:32:55.514834] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:27:09.881 [2024-11-17 04:32:55.514841] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:13.176 [2024-11-17 04:32:58.297868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.297916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:13.176 [2024-11-17 04:32:58.297930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2783.021 ms 00:27:13.176 [2024-11-17 04:32:58.297941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.305290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.305325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:13.176 [2024-11-17 04:32:58.305334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.280 ms 00:27:13.176 [2024-11-17 04:32:58.305341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.305387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.305394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:13.176 [2024-11-17 04:32:58.305408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:13.176 [2024-11-17 04:32:58.305414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.312796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.312828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:13.176 [2024-11-17 04:32:58.312838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.347 ms 00:27:13.176 [2024-11-17 04:32:58.312845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.312871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.312877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:13.176 [2024-11-17 04:32:58.312885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:13.176 [2024-11-17 04:32:58.312893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.313188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.313201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:13.176 [2024-11-17 04:32:58.313209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:27:13.176 [2024-11-17 04:32:58.313215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.313248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.313256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:13.176 [2024-11-17 04:32:58.313267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:13.176 [2024-11-17 04:32:58.313273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.318137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.318164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:13.176 [2024-11-17 04:32:58.318172] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.849 ms 00:27:13.176 [2024-11-17 04:32:58.318181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.324753] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:13.176 [2024-11-17 04:32:58.325464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.325489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:13.176 [2024-11-17 04:32:58.325496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.234 ms 00:27:13.176 [2024-11-17 04:32:58.325503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.344913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.344978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:13.176 [2024-11-17 04:32:58.345001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.390 ms 00:27:13.176 [2024-11-17 04:32:58.345019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.345135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.345155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:13.176 [2024-11-17 04:32:58.345175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:27:13.176 [2024-11-17 04:32:58.345189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.176 [2024-11-17 04:32:58.348991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.176 [2024-11-17 04:32:58.349043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:13.177 [2024-11-17 04:32:58.349058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.775 ms 00:27:13.177 [2024-11-17 04:32:58.349076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.177 [2024-11-17 04:32:58.351747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.177 [2024-11-17 04:32:58.351776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:13.177 [2024-11-17 04:32:58.351784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.630 ms 00:27:13.177 [2024-11-17 04:32:58.351794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.177 [2024-11-17 04:32:58.352015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.177 [2024-11-17 04:32:58.352030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:13.177 [2024-11-17 04:32:58.352037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:27:13.177 [2024-11-17 04:32:58.352051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.177 [2024-11-17 04:32:58.378540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.177 [2024-11-17 04:32:58.378573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:13.177 [2024-11-17 04:32:58.378581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.464 ms 00:27:13.177 [2024-11-17 04:32:58.378591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.177 [2024-11-17 04:32:58.382002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:13.177 [2024-11-17 04:32:58.382034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:13.177 [2024-11-17 04:32:58.382043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.376 ms 00:27:13.177 [2024-11-17 04:32:58.382055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.177 [2024-11-17 04:32:58.385516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.177 [2024-11-17 04:32:58.385547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:13.177 [2024-11-17 04:32:58.385554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.433 ms 00:27:13.177 [2024-11-17 04:32:58.385561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.177 [2024-11-17 04:32:58.388988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.177 [2024-11-17 04:32:58.389018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:13.177 [2024-11-17 04:32:58.389025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.401 ms 00:27:13.177 [2024-11-17 04:32:58.389034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.177 [2024-11-17 04:32:58.389064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.177 [2024-11-17 04:32:58.389072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:13.177 [2024-11-17 04:32:58.389079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:13.177 [2024-11-17 04:32:58.389089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.177 [2024-11-17 04:32:58.389137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.177 [2024-11-17 04:32:58.389146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:13.177 [2024-11-17 04:32:58.389153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:13.177 [2024-11-17 04:32:58.389160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.177 [2024-11-17 04:32:58.389882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2883.472 ms, result 0 00:27:13.177 { 00:27:13.177 "name": "ftl", 00:27:13.177 "uuid": "d01a48cf-805b-4cfb-a942-9c3b9227f856" 00:27:13.177 } 00:27:13.177 04:32:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:13.177 [2024-11-17 04:32:58.590271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.177 04:32:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:13.177 04:32:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:13.435 [2024-11-17 04:32:58.986531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:13.435 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:13.692 [2024-11-17 04:32:59.186799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:13.692 04:32:59 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:13.950 Fill FTL, iteration 1 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91804 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91804 /var/tmp/spdk.tgt.sock 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91804 ']' 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:13.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:13.950 04:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:13.950 [2024-11-17 04:32:59.593272] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:27:13.950 [2024-11-17 04:32:59.593403] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91804 ] 00:27:14.209 [2024-11-17 04:32:59.751529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.209 [2024-11-17 04:32:59.769452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.779 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.779 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:14.779 04:33:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:15.036 ftln1 00:27:15.036 04:33:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:15.036 04:33:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91804 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91804 ']' 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91804 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91804 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:15.293 killing process with pid 91804 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91804' 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91804 00:27:15.293 04:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91804 00:27:15.551 04:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:15.551 04:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:15.551 [2024-11-17 04:33:01.231847] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:27:15.551 [2024-11-17 04:33:01.231972] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91836 ] 00:27:15.808 [2024-11-17 04:33:01.391275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.808 [2024-11-17 04:33:01.408668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.185  [2024-11-17T04:33:03.845Z] Copying: 219/1024 [MB] (219 MBps) [2024-11-17T04:33:04.827Z] Copying: 466/1024 [MB] (247 MBps) [2024-11-17T04:33:05.794Z] Copying: 700/1024 [MB] (234 MBps) [2024-11-17T04:33:06.052Z] Copying: 945/1024 [MB] (245 MBps) [2024-11-17T04:33:06.052Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:27:20.325 00:27:20.584 04:33:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:20.584 Calculate MD5 checksum, iteration 1 00:27:20.584 04:33:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:20.584 04:33:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:20.584 04:33:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:20.584 04:33:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:20.584 04:33:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:20.584 04:33:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:20.584 04:33:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:20.584 [2024-11-17 04:33:06.134309] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:27:20.584 [2024-11-17 04:33:06.134461] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91893 ] 00:27:20.584 [2024-11-17 04:33:06.290865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.842 [2024-11-17 04:33:06.312436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.775  [2024-11-17T04:33:08.069Z] Copying: 647/1024 [MB] (647 MBps) [2024-11-17T04:33:08.329Z] Copying: 1024/1024 [MB] (average 648 MBps) 00:27:22.602 00:27:22.602 04:33:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:22.602 04:33:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:25.150 Fill FTL, iteration 2 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=bef38d87480ef5da75960611f34186c6 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:25.150 04:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:25.150 [2024-11-17 04:33:10.640725] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:27:25.150 [2024-11-17 04:33:10.641779] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91943 ] 00:27:25.150 [2024-11-17 04:33:10.803674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.150 [2024-11-17 04:33:10.846986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.536  [2024-11-17T04:33:13.202Z] Copying: 173/1024 [MB] (173 MBps) [2024-11-17T04:33:14.136Z] Copying: 370/1024 [MB] (197 MBps) [2024-11-17T04:33:15.071Z] Copying: 625/1024 [MB] (255 MBps) [2024-11-17T04:33:16.009Z] Copying: 880/1024 [MB] (255 MBps) [2024-11-17T04:33:16.009Z] Copying: 1024/1024 [MB] (average 222 MBps) 00:27:30.282 00:27:30.282 Calculate MD5 checksum, iteration 2 00:27:30.282 04:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:30.282 04:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:30.282 04:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:30.282 04:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:30.282 04:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:30.282 04:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:30.282 04:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:30.282 04:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:30.282 [2024-11-17 04:33:15.871614] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:27:30.282 [2024-11-17 04:33:15.871743] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91999 ] 00:27:30.543 [2024-11-17 04:33:16.028440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.543 [2024-11-17 04:33:16.058032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.932  [2024-11-17T04:33:18.227Z] Copying: 588/1024 [MB] (588 MBps) [2024-11-17T04:33:18.796Z] Copying: 1024/1024 [MB] (average 587 MBps) 00:27:33.069 00:27:33.069 04:33:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:33.069 04:33:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:35.619 04:33:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:35.619 04:33:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2ce031c653bffb387e210372449cacff 00:27:35.619 04:33:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:35.619 04:33:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:35.619 04:33:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:35.619 [2024-11-17 04:33:21.079709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.619 [2024-11-17 04:33:21.079764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:35.619 [2024-11-17 04:33:21.079777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:35.619 [2024-11-17 04:33:21.079789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.619 [2024-11-17 04:33:21.079810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.619 [2024-11-17 04:33:21.079817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:35.619 [2024-11-17 04:33:21.079825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:35.619 [2024-11-17 04:33:21.079832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.619 [2024-11-17 04:33:21.079849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.619 [2024-11-17 04:33:21.079856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:35.619 [2024-11-17 04:33:21.079863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.619 [2024-11-17 04:33:21.079872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.619 [2024-11-17 04:33:21.079926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.209 ms, result 0 00:27:35.619 true 00:27:35.619 04:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.619 { 00:27:35.619 "name": "ftl", 00:27:35.619 "properties": [ 00:27:35.619 { 00:27:35.619 "name": "superblock_version", 00:27:35.619 "value": 5, 00:27:35.619 "read-only": true 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "name": "base_device", 00:27:35.619 "bands": [ 00:27:35.619 { 00:27:35.619 "id": 0, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 
00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 1, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 2, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 3, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 4, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 5, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 6, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 7, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 8, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 9, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 10, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 11, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 12, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 13, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 14, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 15, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 16, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 17, 00:27:35.619 "state": "FREE", 00:27:35.619 "validity": 0.0 00:27:35.619 } 00:27:35.619 ], 00:27:35.619 "read-only": true 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "name": "cache_device", 00:27:35.619 "type": "bdev", 00:27:35.619 "chunks": [ 00:27:35.619 { 00:27:35.619 "id": 0, 00:27:35.619 "state": "INACTIVE", 00:27:35.619 "utilization": 0.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 1, 00:27:35.619 "state": "CLOSED", 00:27:35.619 "utilization": 1.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 2, 00:27:35.619 "state": "CLOSED", 00:27:35.619 "utilization": 1.0 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 3, 00:27:35.619 "state": "OPEN", 00:27:35.619 "utilization": 0.001953125 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "id": 4, 00:27:35.619 "state": "OPEN", 00:27:35.619 "utilization": 0.0 00:27:35.619 } 00:27:35.619 ], 00:27:35.619 "read-only": true 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "name": "verbose_mode", 00:27:35.619 "value": true, 00:27:35.619 "unit": "", 00:27:35.619 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:35.619 }, 00:27:35.619 { 00:27:35.619 "name": "prep_upgrade_on_shutdown", 00:27:35.619 "value": false, 00:27:35.619 "unit": "", 00:27:35.619 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:35.619 } 00:27:35.619 ] 00:27:35.619 } 00:27:35.619 04:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:35.881 [2024-11-17 04:33:21.484018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:35.881 [2024-11-17 04:33:21.484166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:35.881 [2024-11-17 04:33:21.484215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:35.881 [2024-11-17 04:33:21.484234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.881 [2024-11-17 04:33:21.484265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.881 [2024-11-17 04:33:21.484282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:35.881 [2024-11-17 04:33:21.484297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:35.881 [2024-11-17 04:33:21.484312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.881 [2024-11-17 04:33:21.484336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.881 [2024-11-17 04:33:21.484352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:35.881 [2024-11-17 04:33:21.484368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.881 [2024-11-17 04:33:21.484428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.881 [2024-11-17 04:33:21.484490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.454 ms, result 0 00:27:35.881 true 00:27:35.881 04:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:35.881 04:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:35.881 04:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:36.142 04:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:36.142 04:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:36.142 04:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:36.403 [2024-11-17 04:33:21.900973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.403 [2024-11-17 04:33:21.901007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:36.403 [2024-11-17 04:33:21.901017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:36.403 [2024-11-17 04:33:21.901022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.403 [2024-11-17 04:33:21.901039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.403 [2024-11-17 04:33:21.901046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:36.403 [2024-11-17 04:33:21.901053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:36.403 [2024-11-17 04:33:21.901059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.403 [2024-11-17 04:33:21.901074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.403 [2024-11-17 04:33:21.901080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:36.403 [2024-11-17 04:33:21.901086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:36.403 [2024-11-17 04:33:21.901091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:36.403 [2024-11-17 04:33:21.901132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.148 ms, result 0 00:27:36.403 true 00:27:36.403 04:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:36.403 { 00:27:36.403 "name": "ftl", 00:27:36.403 "properties": [ 00:27:36.403 { 00:27:36.403 "name": "superblock_version", 00:27:36.403 "value": 5, 00:27:36.403 "read-only": true 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "name": "base_device", 00:27:36.403 "bands": [ 00:27:36.403 { 00:27:36.403 "id": 0, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 1, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 2, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 3, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 4, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 5, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 6, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 7, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 8, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 9, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 10, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 11, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 12, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 13, 00:27:36.403 "state": "FREE", 00:27:36.403 "validity": 0.0 00:27:36.403 }, 00:27:36.403 { 00:27:36.403 "id": 14, 00:27:36.404 "state": "FREE", 00:27:36.404 "validity": 0.0 00:27:36.404 }, 00:27:36.404 { 00:27:36.404 "id": 15, 00:27:36.404 "state": "FREE", 00:27:36.404 "validity": 0.0 00:27:36.404 }, 00:27:36.404 { 00:27:36.404 "id": 16, 00:27:36.404 "state": "FREE", 00:27:36.404 "validity": 0.0 00:27:36.404 }, 00:27:36.404 { 00:27:36.404 "id": 17, 00:27:36.404 "state": "FREE", 00:27:36.404 "validity": 0.0 00:27:36.404 } 00:27:36.404 ], 00:27:36.404 "read-only": true 00:27:36.404 }, 00:27:36.404 { 00:27:36.404 "name": "cache_device", 00:27:36.404 "type": "bdev", 00:27:36.404 "chunks": [ 00:27:36.404 { 00:27:36.404 "id": 0, 00:27:36.404 "state": "INACTIVE", 00:27:36.404 "utilization": 0.0 00:27:36.404 }, 00:27:36.404 { 00:27:36.404 "id": 1, 00:27:36.404 "state": "CLOSED", 00:27:36.404 "utilization": 1.0 00:27:36.404 }, 00:27:36.404 { 00:27:36.404 "id": 2, 00:27:36.404 "state": "CLOSED", 00:27:36.404 "utilization": 1.0 00:27:36.404 }, 00:27:36.404 { 00:27:36.404 "id": 3, 00:27:36.404 "state": "OPEN", 00:27:36.404 "utilization": 0.001953125 00:27:36.404 }, 00:27:36.404 { 00:27:36.404 "id": 4, 00:27:36.404 "state": "OPEN", 00:27:36.404 "utilization": 0.0 00:27:36.404 } 00:27:36.404 ], 00:27:36.404 "read-only": true 00:27:36.404 }, 00:27:36.404 { 00:27:36.404 "name": "verbose_mode", 
00:27:36.404 "value": true, 00:27:36.404 "unit": "", 00:27:36.404 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:36.404 }, 00:27:36.404 { 00:27:36.404 "name": "prep_upgrade_on_shutdown", 00:27:36.404 "value": true, 00:27:36.404 "unit": "", 00:27:36.404 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:36.404 } 00:27:36.404 ] 00:27:36.404 } 00:27:36.404 04:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:36.404 04:33:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91689 ]] 00:27:36.404 04:33:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91689 00:27:36.404 04:33:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91689 ']' 00:27:36.404 04:33:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91689 00:27:36.404 04:33:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:36.666 04:33:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.666 04:33:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91689 00:27:36.666 killing process with pid 91689 00:27:36.666 04:33:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:36.666 04:33:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:36.666 04:33:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91689' 00:27:36.666 04:33:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91689 00:27:36.666 04:33:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91689 00:27:36.666 [2024-11-17 04:33:22.260512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:36.666 [2024-11-17 04:33:22.264754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.666 [2024-11-17 04:33:22.264785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:36.666 [2024-11-17 04:33:22.264796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:36.666 [2024-11-17 04:33:22.264803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.666 [2024-11-17 04:33:22.264820] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:36.666 [2024-11-17 04:33:22.265336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.666 [2024-11-17 04:33:22.265351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:36.666 [2024-11-17 04:33:22.265363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:27:36.666 [2024-11-17 04:33:22.265369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.671 [2024-11-17 04:33:30.754351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.671 [2024-11-17 04:33:30.754421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:46.671 [2024-11-17 04:33:30.754434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8488.919 ms 00:27:46.671 [2024-11-17 04:33:30.754442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.671 [2024-11-17 04:33:30.755569] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:46.671 [2024-11-17 04:33:30.755582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:46.671 [2024-11-17 04:33:30.755590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.114 ms 00:27:46.671 [2024-11-17 04:33:30.755596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.671 [2024-11-17 04:33:30.756454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.671 [2024-11-17 04:33:30.756601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:46.671 [2024-11-17 04:33:30.756619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:27:46.671 [2024-11-17 04:33:30.756627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.671 [2024-11-17 04:33:30.757974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.671 [2024-11-17 04:33:30.758001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:46.671 [2024-11-17 04:33:30.758008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.317 ms 00:27:46.671 [2024-11-17 04:33:30.758014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.671 [2024-11-17 04:33:30.760120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.671 [2024-11-17 04:33:30.760227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:46.671 [2024-11-17 04:33:30.760239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.080 ms 00:27:46.671 [2024-11-17 04:33:30.760246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.671 [2024-11-17 04:33:30.760303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.671 [2024-11-17 04:33:30.760310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:46.671 [2024-11-17 04:33:30.760317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:46.671 [2024-11-17 04:33:30.760323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.671 [2024-11-17 04:33:30.761456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.671 [2024-11-17 04:33:30.761478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:46.671 [2024-11-17 04:33:30.761485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.114 ms 00:27:46.671 [2024-11-17 04:33:30.761491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.671 [2024-11-17 04:33:30.762545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.671 [2024-11-17 04:33:30.762570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:46.671 [2024-11-17 04:33:30.762576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.030 ms 00:27:46.671 [2024-11-17 04:33:30.762582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.671 [2024-11-17 04:33:30.763573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.671 [2024-11-17 04:33:30.763598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:46.671 [2024-11-17 04:33:30.763605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.967 ms 00:27:46.671 [2024-11-17 04:33:30.763610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.671 [2024-11-17 04:33:30.764369] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.671 [2024-11-17 04:33:30.764401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:46.671 [2024-11-17 04:33:30.764408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.715 ms 00:27:46.671 [2024-11-17 04:33:30.764413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.764436] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:46.672 [2024-11-17 04:33:30.764447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:46.672 [2024-11-17 04:33:30.764454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:46.672 [2024-11-17 04:33:30.764460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:46.672 [2024-11-17 04:33:30.764467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:46.672 [2024-11-17 04:33:30.764563] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:46.672 [2024-11-17 04:33:30.764570] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d01a48cf-805b-4cfb-a942-9c3b9227f856 00:27:46.672 [2024-11-17 04:33:30.764576] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:46.672 [2024-11-17 04:33:30.764582] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:46.672 [2024-11-17 04:33:30.764587] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:46.672 [2024-11-17 04:33:30.764597] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:46.672 [2024-11-17 04:33:30.764603] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:46.672 [2024-11-17 04:33:30.764609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:46.672 [2024-11-17 04:33:30.764619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:46.672 [2024-11-17 04:33:30.764624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:46.672 [2024-11-17 04:33:30.764630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:46.672 [2024-11-17 04:33:30.764636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.672 [2024-11-17 04:33:30.764643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:46.672 [2024-11-17 04:33:30.764652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:27:46.672 [2024-11-17 04:33:30.764658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.765917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.672 [2024-11-17 04:33:30.765934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:46.672 [2024-11-17 04:33:30.765942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.247 ms 00:27:46.672 [2024-11-17 04:33:30.765948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.766013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.672 [2024-11-17 04:33:30.766019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:46.672 [2024-11-17 04:33:30.766025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:46.672 [2024-11-17 04:33:30.766031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.770446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.770559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:46.672 [2024-11-17 04:33:30.770572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.770578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.770599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.770606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:46.672 [2024-11-17 04:33:30.770612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.770618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.770675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.770686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:46.672 [2024-11-17 04:33:30.770692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.770698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.770709] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.770715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:46.672 [2024-11-17 04:33:30.770723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.770729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.778759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.778796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:46.672 [2024-11-17 04:33:30.778803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.778809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.785184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.785218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:46.672 [2024-11-17 04:33:30.785225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.785231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.785269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.785276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:46.672 [2024-11-17 04:33:30.785286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.785291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.785327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.785334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:46.672 [2024-11-17 04:33:30.785340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.785346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.785411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.785419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:46.672 [2024-11-17 04:33:30.785428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.785436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.785459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.785468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:46.672 [2024-11-17 04:33:30.785474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.785480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.785511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.785518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:46.672 [2024-11-17 04:33:30.785524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.785531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 
[2024-11-17 04:33:30.785566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.672 [2024-11-17 04:33:30.785574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:46.672 [2024-11-17 04:33:30.785579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.672 [2024-11-17 04:33:30.785586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.672 [2024-11-17 04:33:30.785676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8520.877 ms, result 0 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92186 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92186 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92186 ']' 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.585 04:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:48.585 [2024-11-17 04:33:34.206826] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
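Note: the shutdown statistics dumped above report total writes 786752 against user writes 524288, and the logged WAF is consistent with their ratio. A minimal sanity check of that arithmetic, using the two counters exactly as printed by ftl_dev_dump_stats:

awk 'BEGIN { printf "WAF: %.4f\n", 786752 / 524288 }'
# -> WAF: 1.5006, matching the [FTL][ftl] WAF line in the dump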
00:27:48.585 [2024-11-17 04:33:34.207142] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92186 ] 00:27:48.845 [2024-11-17 04:33:34.362568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.845 [2024-11-17 04:33:34.387499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.106 [2024-11-17 04:33:34.642429] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:49.106 [2024-11-17 04:33:34.642478] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:49.106 [2024-11-17 04:33:34.789256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.106 [2024-11-17 04:33:34.789306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:49.106 [2024-11-17 04:33:34.789322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:49.106 [2024-11-17 04:33:34.789330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.106 [2024-11-17 04:33:34.789406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.106 [2024-11-17 04:33:34.789417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:49.106 [2024-11-17 04:33:34.789428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:27:49.106 [2024-11-17 04:33:34.789436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.106 [2024-11-17 04:33:34.789461] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:49.106 [2024-11-17 04:33:34.789721] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:49.106 [2024-11-17 04:33:34.789740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.107 [2024-11-17 04:33:34.789748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:49.107 [2024-11-17 04:33:34.789757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:27:49.107 [2024-11-17 04:33:34.789764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.107 [2024-11-17 04:33:34.790936] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:49.107 [2024-11-17 04:33:34.793538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.107 [2024-11-17 04:33:34.793586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:49.107 [2024-11-17 04:33:34.793596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.605 ms 00:27:49.107 [2024-11-17 04:33:34.793607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.107 [2024-11-17 04:33:34.793672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.107 [2024-11-17 04:33:34.793682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:49.107 [2024-11-17 04:33:34.793690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:49.107 [2024-11-17 04:33:34.793697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.107 [2024-11-17 04:33:34.799207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.107 [2024-11-17 
04:33:34.799239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:49.107 [2024-11-17 04:33:34.799248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.449 ms 00:27:49.107 [2024-11-17 04:33:34.799260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.107 [2024-11-17 04:33:34.799307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.107 [2024-11-17 04:33:34.799316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:49.107 [2024-11-17 04:33:34.799324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:49.107 [2024-11-17 04:33:34.799331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.107 [2024-11-17 04:33:34.799370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.107 [2024-11-17 04:33:34.799402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:49.107 [2024-11-17 04:33:34.799411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:49.107 [2024-11-17 04:33:34.799418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.107 [2024-11-17 04:33:34.799439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:49.107 [2024-11-17 04:33:34.800922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.107 [2024-11-17 04:33:34.801048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:49.107 [2024-11-17 04:33:34.801064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.489 ms 00:27:49.107 [2024-11-17 04:33:34.801072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.107 [2024-11-17 04:33:34.801110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.107 [2024-11-17 04:33:34.801125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:49.107 [2024-11-17 04:33:34.801132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:49.107 [2024-11-17 04:33:34.801143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.107 [2024-11-17 04:33:34.801163] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:49.107 [2024-11-17 04:33:34.801184] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:49.107 [2024-11-17 04:33:34.801217] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:49.107 [2024-11-17 04:33:34.801235] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:49.107 [2024-11-17 04:33:34.801341] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:49.107 [2024-11-17 04:33:34.801357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:49.107 [2024-11-17 04:33:34.801368] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:49.107 [2024-11-17 04:33:34.801391] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:49.107 [2024-11-17 04:33:34.801400] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:49.107 [2024-11-17 04:33:34.801407] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:49.107 [2024-11-17 04:33:34.801414] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:49.107 [2024-11-17 04:33:34.801421] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:49.107 [2024-11-17 04:33:34.801428] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:49.107 [2024-11-17 04:33:34.801436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.107 [2024-11-17 04:33:34.801443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:49.107 [2024-11-17 04:33:34.801453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:27:49.107 [2024-11-17 04:33:34.801460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.107 [2024-11-17 04:33:34.801544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.107 [2024-11-17 04:33:34.801552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:49.107 [2024-11-17 04:33:34.801560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:49.107 [2024-11-17 04:33:34.801570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.107 [2024-11-17 04:33:34.801673] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:49.107 [2024-11-17 04:33:34.801684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:49.107 [2024-11-17 04:33:34.801693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:49.107 [2024-11-17 04:33:34.801709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.107 [2024-11-17 04:33:34.801718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:49.107 [2024-11-17 04:33:34.801726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:49.107 [2024-11-17 04:33:34.801734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:49.107 [2024-11-17 04:33:34.801741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:49.107 [2024-11-17 04:33:34.801750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:49.107 [2024-11-17 04:33:34.801757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.107 [2024-11-17 04:33:34.801765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:49.107 [2024-11-17 04:33:34.801773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:49.107 [2024-11-17 04:33:34.801780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.107 [2024-11-17 04:33:34.801787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:49.107 [2024-11-17 04:33:34.801795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:49.107 [2024-11-17 04:33:34.801807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.107 [2024-11-17 04:33:34.801819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:49.107 [2024-11-17 04:33:34.801826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:49.107 [2024-11-17 04:33:34.801833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.107 [2024-11-17 04:33:34.801841] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:49.107 [2024-11-17 04:33:34.801849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:49.107 [2024-11-17 04:33:34.801856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.107 [2024-11-17 04:33:34.801865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:49.107 [2024-11-17 04:33:34.801873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:49.107 [2024-11-17 04:33:34.801881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.107 [2024-11-17 04:33:34.801888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:49.107 [2024-11-17 04:33:34.801895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:49.107 [2024-11-17 04:33:34.801903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.107 [2024-11-17 04:33:34.801910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:49.107 [2024-11-17 04:33:34.801918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:49.107 [2024-11-17 04:33:34.801925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.107 [2024-11-17 04:33:34.801933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:49.108 [2024-11-17 04:33:34.801942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:49.108 [2024-11-17 04:33:34.801950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.108 [2024-11-17 04:33:34.801957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:49.108 [2024-11-17 04:33:34.801965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:49.108 [2024-11-17 04:33:34.801972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.108 [2024-11-17 04:33:34.801980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:49.108 [2024-11-17 04:33:34.801987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:49.108 [2024-11-17 04:33:34.801994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.108 [2024-11-17 04:33:34.802000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:49.108 [2024-11-17 04:33:34.802007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:49.108 [2024-11-17 04:33:34.802012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.108 [2024-11-17 04:33:34.802019] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:49.108 [2024-11-17 04:33:34.802027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:49.108 [2024-11-17 04:33:34.802035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:49.108 [2024-11-17 04:33:34.802041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.108 [2024-11-17 04:33:34.802048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:49.108 [2024-11-17 04:33:34.802057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:49.108 [2024-11-17 04:33:34.802063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:49.108 [2024-11-17 04:33:34.802070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:49.108 [2024-11-17 04:33:34.802076] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:49.108 [2024-11-17 04:33:34.802082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:49.108 [2024-11-17 04:33:34.802090] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:49.108 [2024-11-17 04:33:34.802101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:49.108 [2024-11-17 04:33:34.802116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:49.108 [2024-11-17 04:33:34.802138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:49.108 [2024-11-17 04:33:34.802145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:49.108 [2024-11-17 04:33:34.802151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:49.108 [2024-11-17 04:33:34.802158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:49.108 [2024-11-17 04:33:34.802208] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:49.108 [2024-11-17 04:33:34.802216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:49.108 [2024-11-17 04:33:34.802231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:49.108 [2024-11-17 04:33:34.802239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:49.108 [2024-11-17 04:33:34.802246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:49.108 [2024-11-17 04:33:34.802253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.108 [2024-11-17 04:33:34.802262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:49.108 [2024-11-17 04:33:34.802269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.652 ms 00:27:49.108 [2024-11-17 04:33:34.802276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.108 [2024-11-17 04:33:34.802317] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:49.108 [2024-11-17 04:33:34.802329] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:53.319 [2024-11-17 04:33:38.646859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.647169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:53.319 [2024-11-17 04:33:38.647245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3844.526 ms 00:27:53.319 [2024-11-17 04:33:38.647273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.659821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.660025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:53.319 [2024-11-17 04:33:38.660092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.405 ms 00:27:53.319 [2024-11-17 04:33:38.660119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.660211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.660352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:53.319 [2024-11-17 04:33:38.660407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:53.319 [2024-11-17 04:33:38.660734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.673239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.673464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:53.319 [2024-11-17 04:33:38.673683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.412 ms 00:27:53.319 [2024-11-17 04:33:38.673769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.673829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.673882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:53.319 [2024-11-17 04:33:38.673909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:53.319 [2024-11-17 04:33:38.673938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.674529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.674597] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:53.319 [2024-11-17 04:33:38.674620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.520 ms 00:27:53.319 [2024-11-17 04:33:38.674640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.674785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.674814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:53.319 [2024-11-17 04:33:38.674836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:53.319 [2024-11-17 04:33:38.674858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.683221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.683388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:53.319 [2024-11-17 04:33:38.683455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.323 ms 00:27:53.319 [2024-11-17 04:33:38.683478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.687197] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:53.319 [2024-11-17 04:33:38.687371] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:53.319 [2024-11-17 04:33:38.687450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.687499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:53.319 [2024-11-17 04:33:38.687523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.823 ms 00:27:53.319 [2024-11-17 04:33:38.687606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.692297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.692478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:53.319 [2024-11-17 04:33:38.692559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.644 ms 00:27:53.319 [2024-11-17 04:33:38.692584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.694874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.695018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:53.319 [2024-11-17 04:33:38.695078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.229 ms 00:27:53.319 [2024-11-17 04:33:38.695101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.697971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.698123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:53.319 [2024-11-17 04:33:38.698176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.547 ms 00:27:53.319 [2024-11-17 04:33:38.698198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.698562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.698604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:53.319 [2024-11-17 
04:33:38.698805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:27:53.319 [2024-11-17 04:33:38.698826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.733125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.733345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:53.319 [2024-11-17 04:33:38.733368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.266 ms 00:27:53.319 [2024-11-17 04:33:38.733401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.741588] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:53.319 [2024-11-17 04:33:38.742552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.742596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:53.319 [2024-11-17 04:33:38.742608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.099 ms 00:27:53.319 [2024-11-17 04:33:38.742616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.742684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.742695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:53.319 [2024-11-17 04:33:38.742705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:53.319 [2024-11-17 04:33:38.742714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.742779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.742790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:53.319 [2024-11-17 04:33:38.742803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:53.319 [2024-11-17 04:33:38.742811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.742840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.742849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:53.319 [2024-11-17 04:33:38.742862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:53.319 [2024-11-17 04:33:38.742870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.319 [2024-11-17 04:33:38.742906] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:53.319 [2024-11-17 04:33:38.742927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.319 [2024-11-17 04:33:38.742935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:53.319 [2024-11-17 04:33:38.742943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:53.320 [2024-11-17 04:33:38.742955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.320 [2024-11-17 04:33:38.747811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.320 [2024-11-17 04:33:38.747965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:53.320 [2024-11-17 04:33:38.747984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.835 ms 00:27:53.320 [2024-11-17 04:33:38.747992] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:53.320 [2024-11-17 04:33:38.748079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.320 [2024-11-17 04:33:38.748090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:53.320 [2024-11-17 04:33:38.748104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:27:53.320 [2024-11-17 04:33:38.748112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.320 [2024-11-17 04:33:38.749636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3959.897 ms, result 0 00:27:53.320 [2024-11-17 04:33:38.762988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.320 [2024-11-17 04:33:38.778982] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:53.320 [2024-11-17 04:33:38.787105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:53.320 04:33:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.320 04:33:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:53.320 04:33:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:53.320 04:33:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:53.320 04:33:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:53.320 [2024-11-17 04:33:39.031175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.320 [2024-11-17 04:33:39.031233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:53.320 [2024-11-17 04:33:39.031248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:53.320 [2024-11-17 04:33:39.031256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.320 [2024-11-17 04:33:39.031280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.320 [2024-11-17 04:33:39.031290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:53.320 [2024-11-17 04:33:39.031299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:53.320 [2024-11-17 04:33:39.031313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.320 [2024-11-17 04:33:39.031334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.320 [2024-11-17 04:33:39.031343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:53.320 [2024-11-17 04:33:39.031351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:53.320 [2024-11-17 04:33:39.031359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.320 [2024-11-17 04:33:39.031439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.253 ms, result 0 00:27:53.320 true 00:27:53.581 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:53.581 { 00:27:53.581 "name": "ftl", 00:27:53.581 "properties": [ 00:27:53.581 { 00:27:53.581 "name": "superblock_version", 00:27:53.581 "value": 5, 00:27:53.581 "read-only": true 00:27:53.581 }, 00:27:53.581 { 
00:27:53.581 "name": "base_device", 00:27:53.581 "bands": [ 00:27:53.581 { 00:27:53.581 "id": 0, 00:27:53.581 "state": "CLOSED", 00:27:53.581 "validity": 1.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 1, 00:27:53.581 "state": "CLOSED", 00:27:53.581 "validity": 1.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 2, 00:27:53.581 "state": "CLOSED", 00:27:53.581 "validity": 0.007843137254901933 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 3, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 4, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 5, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 6, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 7, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 8, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 9, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 10, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 11, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 12, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 13, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 14, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 15, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 16, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 17, 00:27:53.581 "state": "FREE", 00:27:53.581 "validity": 0.0 00:27:53.581 } 00:27:53.581 ], 00:27:53.581 "read-only": true 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "name": "cache_device", 00:27:53.581 "type": "bdev", 00:27:53.581 "chunks": [ 00:27:53.581 { 00:27:53.581 "id": 0, 00:27:53.581 "state": "INACTIVE", 00:27:53.581 "utilization": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 1, 00:27:53.581 "state": "OPEN", 00:27:53.581 "utilization": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 2, 00:27:53.581 "state": "OPEN", 00:27:53.581 "utilization": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 3, 00:27:53.581 "state": "FREE", 00:27:53.581 "utilization": 0.0 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "id": 4, 00:27:53.581 "state": "FREE", 00:27:53.581 "utilization": 0.0 00:27:53.581 } 00:27:53.581 ], 00:27:53.581 "read-only": true 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "name": "verbose_mode", 00:27:53.581 "value": true, 00:27:53.581 "unit": "", 00:27:53.581 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:53.581 }, 00:27:53.581 { 00:27:53.581 "name": "prep_upgrade_on_shutdown", 00:27:53.581 "value": false, 00:27:53.581 "unit": "", 00:27:53.581 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:53.581 } 00:27:53.581 ] 00:27:53.581 } 00:27:53.581 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:53.581 04:33:39 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:53.581 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:53.841 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:53.841 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:53.841 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:53.841 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:53.841 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:54.101 Validate MD5 checksum, iteration 1 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:54.101 04:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:54.101 [2024-11-17 04:33:39.799696] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
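Note: the used=0 result above comes from counting cache_device chunks whose utilization is non-zero in the bdev_ftl_get_properties output; in the JSON dumped earlier every chunk reports utilization 0.0, so the filter returns 0 and the subsequent [[ 0 -ne 0 ]] test fails. A minimal reproduction of that count, assuming the same properties JSON has been saved to a file (the path is illustrative):

# count cache chunks that hold data; against the dump above this prints 0
jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' /tmp/ftl_properties.json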
00:27:54.101 [2024-11-17 04:33:39.800547] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92255 ] 00:27:54.361 [2024-11-17 04:33:39.961642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.361 [2024-11-17 04:33:39.991793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.749  [2024-11-17T04:33:42.422Z] Copying: 637/1024 [MB] (637 MBps) [2024-11-17T04:33:42.993Z] Copying: 1024/1024 [MB] (average 582 MBps) 00:27:57.266 00:27:57.266 04:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:57.266 04:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:59.900 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:59.900 Validate MD5 checksum, iteration 2 00:27:59.900 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=bef38d87480ef5da75960611f34186c6 00:27:59.900 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ bef38d87480ef5da75960611f34186c6 != \b\e\f\3\8\d\8\7\4\8\0\e\f\5\d\a\7\5\9\6\0\6\1\1\f\3\4\1\8\6\c\6 ]] 00:27:59.900 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:59.900 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:59.900 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:59.901 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:59.901 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:59.901 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:59.901 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:59.901 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:59.901 04:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:59.901 [2024-11-17 04:33:45.208152] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:27:59.901 [2024-11-17 04:33:45.208398] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92320 ] 00:27:59.901 [2024-11-17 04:33:45.364000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.901 [2024-11-17 04:33:45.388393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.283  [2024-11-17T04:33:47.581Z] Copying: 631/1024 [MB] (631 MBps) [2024-11-17T04:33:50.873Z] Copying: 1024/1024 [MB] (average 584 MBps) 00:28:05.146 00:28:05.146 04:33:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:05.146 04:33:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2ce031c653bffb387e210372449cacff 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2ce031c653bffb387e210372449cacff != \2\c\e\0\3\1\c\6\5\3\b\f\f\b\3\8\7\e\2\1\0\3\7\2\4\4\9\c\a\c\f\f ]] 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92186 ]] 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92186 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92404 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:07.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92404 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92404 ']' 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
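Note: both checksum iterations above follow the same shape: read 1024 MiB from ftln1 over NVMe/TCP with spdk_dd, advance the skip offset by 1024 MiB, and compare the md5sum of the dumped file against the expected value. A condensed sketch of that loop, with variable names taken from the xtrace (the reference-checksum handling and file variable are illustrative):

i=0; skip=0
while (( i < iterations )); do
  tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  skip=$(( skip + 1024 ))
  sum=$(md5sum "$testfile" | cut -f1 '-d ')
  # iteration 1 produced bef38d87480ef5da75960611f34186c6, iteration 2 produced 2ce031c653bffb387e210372449cacff
  [[ $sum == "$ref_md5" ]] || return 1
  (( i++ ))
done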
00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:07.047 04:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:07.306 [2024-11-17 04:33:52.773030] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:07.306 [2024-11-17 04:33:52.773139] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92404 ] 00:28:07.306 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 92186 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:07.306 [2024-11-17 04:33:52.926586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.306 [2024-11-17 04:33:52.943952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.564 [2024-11-17 04:33:53.195260] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:07.564 [2024-11-17 04:33:53.195312] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:07.824 [2024-11-17 04:33:53.332832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.333004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:07.824 [2024-11-17 04:33:53.333024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:07.824 [2024-11-17 04:33:53.333032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.333078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.333086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:07.824 [2024-11-17 04:33:53.333094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:07.824 [2024-11-17 04:33:53.333100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.333119] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:07.824 [2024-11-17 04:33:53.333290] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:07.824 [2024-11-17 04:33:53.333306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.333312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:07.824 [2024-11-17 04:33:53.333319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:28:07.824 [2024-11-17 04:33:53.333327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.333538] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:07.824 [2024-11-17 04:33:53.336606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.336637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:07.824 [2024-11-17 04:33:53.336648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.068 ms 
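Note: the layout dump that follows (like the one from the first startup) can be cross-checked against the SB metadata lines: the l2p region is listed as 14.50 MiB, and the superblock entry at the same offset (Region type:0x2 blk_offs:0x20 blk_sz:0xe80) gives the same figure once a 4 KiB FTL block size is assumed, with the 3774873 four-byte L2P entries reported earlier fitting inside it. A quick arithmetic check under that block-size assumption:

awk 'BEGIN { printf "l2p region: %.2f MiB, l2p table: %.2f MiB\n", 3712 * 4096 / 1048576, 3774873 * 4 / 1048576 }'
# 0xe80 = 3712 blocks -> 14.50 MiB region; 3774873 entries * 4 B -> about 14.40 MiB of mappings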
00:28:07.824 [2024-11-17 04:33:53.336654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.337400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.337424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:07.824 [2024-11-17 04:33:53.337435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:07.824 [2024-11-17 04:33:53.337442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.337650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.337661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:07.824 [2024-11-17 04:33:53.337669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:28:07.824 [2024-11-17 04:33:53.337675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.337705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.337712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:07.824 [2024-11-17 04:33:53.337720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:07.824 [2024-11-17 04:33:53.337725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.337746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.337753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:07.824 [2024-11-17 04:33:53.337760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:07.824 [2024-11-17 04:33:53.337767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.337782] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:07.824 [2024-11-17 04:33:53.338499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.338519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:07.824 [2024-11-17 04:33:53.338526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.721 ms 00:28:07.824 [2024-11-17 04:33:53.338532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.338549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.338557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:07.824 [2024-11-17 04:33:53.338563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:07.824 [2024-11-17 04:33:53.338568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.338584] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:07.824 [2024-11-17 04:33:53.338599] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:07.824 [2024-11-17 04:33:53.338624] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:07.824 [2024-11-17 04:33:53.338638] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:07.824 [2024-11-17 
04:33:53.338719] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:07.824 [2024-11-17 04:33:53.338729] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:07.824 [2024-11-17 04:33:53.338740] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:07.824 [2024-11-17 04:33:53.338747] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:07.824 [2024-11-17 04:33:53.338753] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:07.824 [2024-11-17 04:33:53.338762] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:07.824 [2024-11-17 04:33:53.338767] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:07.824 [2024-11-17 04:33:53.338773] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:07.824 [2024-11-17 04:33:53.338778] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:07.824 [2024-11-17 04:33:53.338784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.338789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:07.824 [2024-11-17 04:33:53.338796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:28:07.824 [2024-11-17 04:33:53.338802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.338866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.824 [2024-11-17 04:33:53.338872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:07.824 [2024-11-17 04:33:53.338877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:07.824 [2024-11-17 04:33:53.338884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.824 [2024-11-17 04:33:53.338960] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:07.824 [2024-11-17 04:33:53.338967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:07.824 [2024-11-17 04:33:53.338973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:07.824 [2024-11-17 04:33:53.338980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.824 [2024-11-17 04:33:53.338987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:07.824 [2024-11-17 04:33:53.338992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:07.824 [2024-11-17 04:33:53.338998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:07.824 [2024-11-17 04:33:53.339003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:07.824 [2024-11-17 04:33:53.339009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:07.825 [2024-11-17 04:33:53.339014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.825 [2024-11-17 04:33:53.339019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:07.825 [2024-11-17 04:33:53.339025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:07.825 [2024-11-17 04:33:53.339030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.825 [2024-11-17 
04:33:53.339039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:07.825 [2024-11-17 04:33:53.339044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:07.825 [2024-11-17 04:33:53.339053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.825 [2024-11-17 04:33:53.339058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:07.825 [2024-11-17 04:33:53.339063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:07.825 [2024-11-17 04:33:53.339068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.825 [2024-11-17 04:33:53.339073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:07.825 [2024-11-17 04:33:53.339078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:07.825 [2024-11-17 04:33:53.339083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.825 [2024-11-17 04:33:53.339088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:07.825 [2024-11-17 04:33:53.339093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:07.825 [2024-11-17 04:33:53.339098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.825 [2024-11-17 04:33:53.339102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:07.825 [2024-11-17 04:33:53.339107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:07.825 [2024-11-17 04:33:53.339112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.825 [2024-11-17 04:33:53.339117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:07.825 [2024-11-17 04:33:53.339122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:07.825 [2024-11-17 04:33:53.339126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.825 [2024-11-17 04:33:53.339133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:07.825 [2024-11-17 04:33:53.339138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:07.825 [2024-11-17 04:33:53.339142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.825 [2024-11-17 04:33:53.339147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:07.825 [2024-11-17 04:33:53.339153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:07.825 [2024-11-17 04:33:53.339159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.825 [2024-11-17 04:33:53.339165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:07.825 [2024-11-17 04:33:53.339171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:07.825 [2024-11-17 04:33:53.339177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.825 [2024-11-17 04:33:53.339183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:07.825 [2024-11-17 04:33:53.339188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:07.825 [2024-11-17 04:33:53.339194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.825 [2024-11-17 04:33:53.339199] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:07.825 [2024-11-17 04:33:53.339207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:07.825 
[2024-11-17 04:33:53.339213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:07.825 [2024-11-17 04:33:53.339221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.825 [2024-11-17 04:33:53.339229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:07.825 [2024-11-17 04:33:53.339235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:07.825 [2024-11-17 04:33:53.339241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:07.825 [2024-11-17 04:33:53.339246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:07.825 [2024-11-17 04:33:53.339252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:07.825 [2024-11-17 04:33:53.339258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:07.825 [2024-11-17 04:33:53.339265] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:07.825 [2024-11-17 04:33:53.339272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:07.825 [2024-11-17 04:33:53.339285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:07.825 [2024-11-17 04:33:53.339303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:07.825 [2024-11-17 04:33:53.339310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:07.825 [2024-11-17 04:33:53.339316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:07.825 [2024-11-17 04:33:53.339321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:07.825 [2024-11-17 04:33:53.339367] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:07.825 [2024-11-17 04:33:53.339389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:07.825 [2024-11-17 04:33:53.339403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:07.825 [2024-11-17 04:33:53.339410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:07.825 [2024-11-17 04:33:53.339416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:07.825 [2024-11-17 04:33:53.339423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.825 [2024-11-17 04:33:53.339429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:07.825 [2024-11-17 04:33:53.339439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:28:07.825 [2024-11-17 04:33:53.339448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.825 [2024-11-17 04:33:53.345314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.825 [2024-11-17 04:33:53.345332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:07.825 [2024-11-17 04:33:53.345339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.829 ms 00:28:07.825 [2024-11-17 04:33:53.345344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.825 [2024-11-17 04:33:53.345370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.825 [2024-11-17 04:33:53.345394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:07.825 [2024-11-17 04:33:53.345401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:07.825 [2024-11-17 04:33:53.345408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.825 [2024-11-17 04:33:53.352793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.825 [2024-11-17 04:33:53.352824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:07.825 [2024-11-17 04:33:53.352831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.348 ms 00:28:07.825 [2024-11-17 04:33:53.352837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.825 [2024-11-17 04:33:53.352859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.825 [2024-11-17 04:33:53.352865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:07.825 [2024-11-17 04:33:53.352874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:07.825 [2024-11-17 04:33:53.352879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.825 [2024-11-17 04:33:53.352934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.825 [2024-11-17 04:33:53.352941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
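The superblock metadata layout dumped above is internally consistent: in the nvc listing, each region's blk_offs plus blk_sz lands exactly on the next region's blk_offs, and the trailing free region (type 0xfffffffe) ends at the device capacity. A quick sanity check with the values copied from this dump (the array names below are illustrative only, and the 4 KiB block size is an assumption that happens to reproduce the 5120.00 MiB NV cache capacity reported earlier):

offs=(0x0 0x20 0xea0 0xec0 0xee0 0x16e0 0x1ee0 0x26e0 0x2ee0 0x2f00 0x2f20 0x2f40 0x2f60 0x2f80 0x2fa0)
szs=(0x20 0xe80 0x20 0x20 0x800 0x800 0x800 0x800 0x20 0x20 0x20 0x20 0x20 0x20 0x13d060)
# every region should start where the previous one ends
for ((i = 0; i + 1 < ${#offs[@]}; i++)); do
  (( offs[i] + szs[i] == offs[i + 1] )) || echo "gap after blk_offs ${offs[i]}"
done
# total blocks times the assumed 4 KiB block size, expressed in MiB
echo $(( (offs[14] + szs[14]) * 4096 / 1024 / 1024 ))   # prints 5120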
00:28:07.825 [2024-11-17 04:33:53.352948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:07.825 [2024-11-17 04:33:53.352955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.825 [2024-11-17 04:33:53.352984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.825 [2024-11-17 04:33:53.352991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:07.825 [2024-11-17 04:33:53.352997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:07.825 [2024-11-17 04:33:53.353129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.825 [2024-11-17 04:33:53.357826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.825 [2024-11-17 04:33:53.357854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:07.825 [2024-11-17 04:33:53.357860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.678 ms 00:28:07.825 [2024-11-17 04:33:53.357870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.825 [2024-11-17 04:33:53.357946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.825 [2024-11-17 04:33:53.357955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:07.826 [2024-11-17 04:33:53.357961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:07.826 [2024-11-17 04:33:53.357969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.826 [2024-11-17 04:33:53.372638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.826 [2024-11-17 04:33:53.372691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:07.826 [2024-11-17 04:33:53.372708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.653 ms 00:28:07.826 [2024-11-17 04:33:53.372718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.826 [2024-11-17 04:33:53.374168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.826 [2024-11-17 04:33:53.374322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:07.826 [2024-11-17 04:33:53.374344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:28:07.826 [2024-11-17 04:33:53.374360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.826 [2024-11-17 04:33:53.390164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.826 [2024-11-17 04:33:53.390204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:07.826 [2024-11-17 04:33:53.390216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.751 ms 00:28:07.826 [2024-11-17 04:33:53.390226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.826 [2024-11-17 04:33:53.390336] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:07.826 [2024-11-17 04:33:53.390431] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:07.826 [2024-11-17 04:33:53.390508] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:07.826 [2024-11-17 04:33:53.390585] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:07.826 [2024-11-17 04:33:53.390594] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.826 [2024-11-17 04:33:53.390602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:07.826 [2024-11-17 04:33:53.390611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.331 ms 00:28:07.826 [2024-11-17 04:33:53.390621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.826 [2024-11-17 04:33:53.390654] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:07.826 [2024-11-17 04:33:53.390664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.826 [2024-11-17 04:33:53.390671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:07.826 [2024-11-17 04:33:53.390679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:07.826 [2024-11-17 04:33:53.390686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.826 [2024-11-17 04:33:53.393319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.826 [2024-11-17 04:33:53.393439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:07.826 [2024-11-17 04:33:53.393452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.615 ms 00:28:07.826 [2024-11-17 04:33:53.393462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.826 [2024-11-17 04:33:53.393959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.826 [2024-11-17 04:33:53.393980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:07.826 [2024-11-17 04:33:53.393987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:07.826 [2024-11-17 04:33:53.393993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.826 [2024-11-17 04:33:53.394049] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:07.826 [2024-11-17 04:33:53.394173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.826 [2024-11-17 04:33:53.394181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:07.826 [2024-11-17 04:33:53.394187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.126 ms 00:28:07.826 [2024-11-17 04:33:53.394195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.397 [2024-11-17 04:33:53.928113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.397 [2024-11-17 04:33:53.928175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:08.397 [2024-11-17 04:33:53.928191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 533.665 ms 00:28:08.397 [2024-11-17 04:33:53.928204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.397 [2024-11-17 04:33:53.929660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.397 [2024-11-17 04:33:53.929694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:08.397 [2024-11-17 04:33:53.929711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.083 ms 00:28:08.397 [2024-11-17 04:33:53.929719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.397 [2024-11-17 04:33:53.930238] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:28:08.397 [2024-11-17 04:33:53.930266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.397 [2024-11-17 04:33:53.930275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:08.397 [2024-11-17 04:33:53.930284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:28:08.397 [2024-11-17 04:33:53.930292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.397 [2024-11-17 04:33:53.930329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.397 [2024-11-17 04:33:53.930341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:08.397 [2024-11-17 04:33:53.930356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:08.397 [2024-11-17 04:33:53.930363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.397 [2024-11-17 04:33:53.930410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 536.358 ms, result 0 00:28:08.397 [2024-11-17 04:33:53.930454] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:08.397 [2024-11-17 04:33:53.930557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.397 [2024-11-17 04:33:53.930566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:08.397 [2024-11-17 04:33:53.930574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.104 ms 00:28:08.397 [2024-11-17 04:33:53.930580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.535769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.535938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:08.970 [2024-11-17 04:33:54.535958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 604.817 ms 00:28:08.970 [2024-11-17 04:33:54.535966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.537406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.537437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:08.970 [2024-11-17 04:33:54.537447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.046 ms 00:28:08.970 [2024-11-17 04:33:54.537454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.537784] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:08.970 [2024-11-17 04:33:54.537811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.537819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:08.970 [2024-11-17 04:33:54.537827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.331 ms 00:28:08.970 [2024-11-17 04:33:54.537834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.537861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.537869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:08.970 [2024-11-17 04:33:54.537877] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:08.970 [2024-11-17 04:33:54.537883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.537917] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 607.460 ms, result 0 00:28:08.970 [2024-11-17 04:33:54.537961] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:08.970 [2024-11-17 04:33:54.537971] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:08.970 [2024-11-17 04:33:54.537980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.537988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:08.970 [2024-11-17 04:33:54.537996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1143.947 ms 00:28:08.970 [2024-11-17 04:33:54.538006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.538034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.538045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:08.970 [2024-11-17 04:33:54.538053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:08.970 [2024-11-17 04:33:54.538060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.545693] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:08.970 [2024-11-17 04:33:54.545791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.545801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:08.970 [2024-11-17 04:33:54.545812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.716 ms 00:28:08.970 [2024-11-17 04:33:54.545820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.546478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.546591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:08.970 [2024-11-17 04:33:54.546604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.610 ms 00:28:08.970 [2024-11-17 04:33:54.546612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.548842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.548866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:08.970 [2024-11-17 04:33:54.548875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.211 ms 00:28:08.970 [2024-11-17 04:33:54.548883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.548940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.548949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:08.970 [2024-11-17 04:33:54.548957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:08.970 [2024-11-17 04:33:54.548965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.549064] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.549074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:08.970 [2024-11-17 04:33:54.549084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:08.970 [2024-11-17 04:33:54.549091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.549111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.549119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:08.970 [2024-11-17 04:33:54.549126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:08.970 [2024-11-17 04:33:54.549133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.549165] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:08.970 [2024-11-17 04:33:54.549174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.549181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:08.970 [2024-11-17 04:33:54.549188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:08.970 [2024-11-17 04:33:54.549197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.549245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.970 [2024-11-17 04:33:54.549253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:08.970 [2024-11-17 04:33:54.549260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:08.970 [2024-11-17 04:33:54.549267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.970 [2024-11-17 04:33:54.550062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1216.852 ms, result 0 00:28:08.970 [2024-11-17 04:33:54.562420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.970 [2024-11-17 04:33:54.578421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:08.970 [2024-11-17 04:33:54.586524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:09.917 Validate MD5 checksum, iteration 1 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:09.917 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:09.918 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:09.918 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:09.918 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:09.918 04:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:09.918 [2024-11-17 04:33:55.344856] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:09.918 [2024-11-17 04:33:55.345254] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92438 ] 00:28:09.918 [2024-11-17 04:33:55.504151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.918 [2024-11-17 04:33:55.522360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.300  [2024-11-17T04:33:57.597Z] Copying: 709/1024 [MB] (709 MBps) [2024-11-17T04:33:58.170Z] Copying: 1024/1024 [MB] (average 697 MBps) 00:28:12.443 00:28:12.443 04:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:12.443 04:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:14.993 Validate MD5 checksum, iteration 2 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=bef38d87480ef5da75960611f34186c6 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ bef38d87480ef5da75960611f34186c6 != \b\e\f\3\8\d\8\7\4\8\0\e\f\5\d\a\7\5\9\6\0\6\1\1\f\3\4\1\8\6\c\6 ]] 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:14.993 04:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:14.993 [2024-11-17 04:34:00.326018] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:14.993 [2024-11-17 04:34:00.326482] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92495 ] 00:28:14.993 [2024-11-17 04:34:00.486464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.993 [2024-11-17 04:34:00.514993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.376  [2024-11-17T04:34:02.675Z] Copying: 669/1024 [MB] (669 MBps) [2024-11-17T04:34:03.246Z] Copying: 1024/1024 [MB] (average 633 MBps) 00:28:17.519 00:28:17.519 04:34:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:17.519 04:34:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:19.426 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:19.426 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2ce031c653bffb387e210372449cacff 00:28:19.426 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2ce031c653bffb387e210372449cacff != \2\c\e\0\3\1\c\6\5\3\b\f\f\b\3\8\7\e\2\1\0\3\7\2\4\4\9\c\a\c\f\f ]] 00:28:19.426 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:19.426 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:19.427 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:19.427 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:19.427 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:19.427 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92404 ]] 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92404 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92404 ']' 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 92404 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92404 00:28:19.686 killing process with pid 92404 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92404' 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 92404 00:28:19.686 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 92404 00:28:19.686 [2024-11-17 04:34:05.319523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:19.686 [2024-11-17 04:34:05.322684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.686 [2024-11-17 04:34:05.322717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:19.686 [2024-11-17 04:34:05.322727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:19.686 [2024-11-17 04:34:05.322734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.686 [2024-11-17 04:34:05.322750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:19.686 [2024-11-17 04:34:05.323135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.686 [2024-11-17 04:34:05.323154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:19.686 [2024-11-17 04:34:05.323168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.374 ms 00:28:19.687 [2024-11-17 04:34:05.323174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.323363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.323385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:19.687 [2024-11-17 04:34:05.323392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:28:19.687 [2024-11-17 04:34:05.323398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.324399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.324418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:19.687 [2024-11-17 04:34:05.324426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.988 ms 00:28:19.687 [2024-11-17 04:34:05.324437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.325276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.325298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:19.687 [2024-11-17 04:34:05.325305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.815 ms 00:28:19.687 [2024-11-17 04:34:05.325311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.326717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.326745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:19.687 [2024-11-17 04:34:05.326753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.380 ms 00:28:19.687 [2024-11-17 04:34:05.326762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.328002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.328032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:28:19.687 [2024-11-17 04:34:05.328040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.212 ms 00:28:19.687 [2024-11-17 04:34:05.328045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.328104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.328111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:19.687 [2024-11-17 04:34:05.328117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:19.687 [2024-11-17 04:34:05.328122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.329207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.329235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:19.687 [2024-11-17 04:34:05.329242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.069 ms 00:28:19.687 [2024-11-17 04:34:05.329247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.330397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.330421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:19.687 [2024-11-17 04:34:05.330428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.123 ms 00:28:19.687 [2024-11-17 04:34:05.330433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.331497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.331523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:19.687 [2024-11-17 04:34:05.331530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.039 ms 00:28:19.687 [2024-11-17 04:34:05.331535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.332587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.332615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:19.687 [2024-11-17 04:34:05.332622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.995 ms 00:28:19.687 [2024-11-17 04:34:05.332627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.332652] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:19.687 [2024-11-17 04:34:05.332663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:19.687 [2024-11-17 04:34:05.332672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:19.687 [2024-11-17 04:34:05.332678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:19.687 [2024-11-17 04:34:05.332685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332703] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:19.687 [2024-11-17 04:34:05.332774] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:19.687 [2024-11-17 04:34:05.332780] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d01a48cf-805b-4cfb-a942-9c3b9227f856 00:28:19.687 [2024-11-17 04:34:05.332786] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:19.687 [2024-11-17 04:34:05.332792] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:19.687 [2024-11-17 04:34:05.332797] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:19.687 [2024-11-17 04:34:05.332803] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:19.687 [2024-11-17 04:34:05.332808] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:19.687 [2024-11-17 04:34:05.332813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:19.687 [2024-11-17 04:34:05.332819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:19.687 [2024-11-17 04:34:05.332824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:19.687 [2024-11-17 04:34:05.332828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:19.687 [2024-11-17 04:34:05.332835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.332844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:19.687 [2024-11-17 04:34:05.332850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:28:19.687 [2024-11-17 04:34:05.332856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.334089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.334186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:28:19.687 [2024-11-17 04:34:05.334198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.220 ms 00:28:19.687 [2024-11-17 04:34:05.334204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.334273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.687 [2024-11-17 04:34:05.334280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:19.687 [2024-11-17 04:34:05.334286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:28:19.687 [2024-11-17 04:34:05.334292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.338818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.687 [2024-11-17 04:34:05.338845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:19.687 [2024-11-17 04:34:05.338853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.687 [2024-11-17 04:34:05.338859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.338884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.687 [2024-11-17 04:34:05.338890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:19.687 [2024-11-17 04:34:05.338896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.687 [2024-11-17 04:34:05.338902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.338939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.687 [2024-11-17 04:34:05.338946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:19.687 [2024-11-17 04:34:05.338953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.687 [2024-11-17 04:34:05.338958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.338979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.687 [2024-11-17 04:34:05.338987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:19.687 [2024-11-17 04:34:05.338993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.687 [2024-11-17 04:34:05.338998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.347263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.687 [2024-11-17 04:34:05.347297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:19.687 [2024-11-17 04:34:05.347305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.687 [2024-11-17 04:34:05.347311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.353490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.687 [2024-11-17 04:34:05.353518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:19.687 [2024-11-17 04:34:05.353526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.687 [2024-11-17 04:34:05.353532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.687 [2024-11-17 04:34:05.353582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.688 [2024-11-17 04:34:05.353590] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:19.688 [2024-11-17 04:34:05.353596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.688 [2024-11-17 04:34:05.353603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.688 [2024-11-17 04:34:05.353633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.688 [2024-11-17 04:34:05.353644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:19.688 [2024-11-17 04:34:05.353652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.688 [2024-11-17 04:34:05.353658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.688 [2024-11-17 04:34:05.353709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.688 [2024-11-17 04:34:05.353716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:19.688 [2024-11-17 04:34:05.353722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.688 [2024-11-17 04:34:05.353728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.688 [2024-11-17 04:34:05.353752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.688 [2024-11-17 04:34:05.353759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:19.688 [2024-11-17 04:34:05.353765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.688 [2024-11-17 04:34:05.353781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.688 [2024-11-17 04:34:05.353810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.688 [2024-11-17 04:34:05.353821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:19.688 [2024-11-17 04:34:05.353827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.688 [2024-11-17 04:34:05.353833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.688 [2024-11-17 04:34:05.353867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.688 [2024-11-17 04:34:05.353877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:19.688 [2024-11-17 04:34:05.353886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.688 [2024-11-17 04:34:05.353892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.688 [2024-11-17 04:34:05.353990] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 31.281 ms, result 0 00:28:19.948 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:19.948 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:19.948 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:19.948 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:19.948 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:19.948 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:19.949 Remove shared memory files 00:28:19.949 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:19.949 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:28:19.949 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:19.949 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:19.949 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92186 00:28:19.949 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:19.949 04:34:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:19.949 00:28:19.949 real 1m13.543s 00:28:19.949 user 1m38.370s 00:28:19.949 sys 0m20.291s 00:28:19.949 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.949 ************************************ 00:28:19.949 END TEST ftl_upgrade_shutdown 00:28:19.949 ************************************ 00:28:19.949 04:34:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:19.949 04:34:05 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:19.949 04:34:05 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:19.949 04:34:05 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:28:19.949 04:34:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.949 04:34:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:19.949 ************************************ 00:28:19.949 START TEST ftl_restore_fast 00:28:19.949 ************************************ 00:28:19.949 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:19.949 * Looking for test storage... 00:28:19.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:19.949 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:19.949 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:28:19.949 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:20.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.210 --rc genhtml_branch_coverage=1 00:28:20.210 --rc genhtml_function_coverage=1 00:28:20.210 --rc genhtml_legend=1 00:28:20.210 --rc geninfo_all_blocks=1 00:28:20.210 --rc geninfo_unexecuted_blocks=1 00:28:20.210 00:28:20.210 ' 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:20.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.210 --rc genhtml_branch_coverage=1 00:28:20.210 --rc genhtml_function_coverage=1 00:28:20.210 --rc genhtml_legend=1 00:28:20.210 --rc geninfo_all_blocks=1 00:28:20.210 --rc geninfo_unexecuted_blocks=1 00:28:20.210 00:28:20.210 ' 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:20.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.210 --rc genhtml_branch_coverage=1 00:28:20.210 --rc genhtml_function_coverage=1 00:28:20.210 --rc genhtml_legend=1 00:28:20.210 --rc geninfo_all_blocks=1 00:28:20.210 --rc geninfo_unexecuted_blocks=1 00:28:20.210 00:28:20.210 ' 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:20.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.210 --rc genhtml_branch_coverage=1 00:28:20.210 --rc genhtml_function_coverage=1 00:28:20.210 --rc genhtml_legend=1 00:28:20.210 --rc geninfo_all_blocks=1 00:28:20.210 --rc geninfo_unexecuted_blocks=1 00:28:20.210 00:28:20.210 ' 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:20.210 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
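The scripts/common.sh xtrace just above is the lcov version gate from autotest_common.sh: the reported version (1.15) and the threshold (2) are each split on '.', '-' and ':', every field is normalized through the decimal helper, and the fields are compared left to right, so 'lt 1.15 2' returns 0 because 1 < 2 in the first field, which then feeds the LCOV_OPTS/LCOV exports seen right after. A minimal stand-alone sketch of that comparison (the helper name version_lt is made up here; the real script routes through cmp_versions and the decimal normalizer, which this sketch omits):

version_lt() {
  local IFS=.-: v
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not strictly less-than
}
version_lt 1.15 2 && echo "1.15 sorts before 2, keep the 1.x-era lcov options"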
00:28:20.210 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.4J39y1qbXr 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:20.211 04:34:05 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92625 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92625 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 92625 ']' 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:20.211 04:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:20.211 [2024-11-17 04:34:05.822931] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:20.211 [2024-11-17 04:34:05.823053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92625 ] 00:28:20.470 [2024-11-17 04:34:05.978744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.470 [2024-11-17 04:34:05.995538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.037 04:34:06 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.037 04:34:06 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:28:21.037 04:34:06 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:21.037 04:34:06 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:21.037 04:34:06 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:21.037 04:34:06 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:21.037 04:34:06 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:21.037 04:34:06 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:21.296 04:34:06 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:21.296 04:34:06 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:21.296 04:34:06 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:21.296 04:34:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:21.296 04:34:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:21.296 04:34:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:21.296 04:34:06 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:28:21.296 04:34:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:21.555 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:21.555 { 00:28:21.555 "name": "nvme0n1", 00:28:21.555 "aliases": [ 00:28:21.555 "b77f9dc1-116f-4cc7-9b3e-0f6b39508096" 00:28:21.555 ], 00:28:21.555 "product_name": "NVMe disk", 00:28:21.555 "block_size": 4096, 00:28:21.555 "num_blocks": 1310720, 00:28:21.555 "uuid": "b77f9dc1-116f-4cc7-9b3e-0f6b39508096", 00:28:21.555 "numa_id": -1, 00:28:21.555 "assigned_rate_limits": { 00:28:21.555 "rw_ios_per_sec": 0, 00:28:21.555 "rw_mbytes_per_sec": 0, 00:28:21.555 "r_mbytes_per_sec": 0, 00:28:21.555 "w_mbytes_per_sec": 0 00:28:21.555 }, 00:28:21.555 "claimed": true, 00:28:21.555 "claim_type": "read_many_write_one", 00:28:21.555 "zoned": false, 00:28:21.555 "supported_io_types": { 00:28:21.555 "read": true, 00:28:21.555 "write": true, 00:28:21.555 "unmap": true, 00:28:21.555 "flush": true, 00:28:21.555 "reset": true, 00:28:21.555 "nvme_admin": true, 00:28:21.555 "nvme_io": true, 00:28:21.555 "nvme_io_md": false, 00:28:21.555 "write_zeroes": true, 00:28:21.555 "zcopy": false, 00:28:21.555 "get_zone_info": false, 00:28:21.555 "zone_management": false, 00:28:21.555 "zone_append": false, 00:28:21.555 "compare": true, 00:28:21.555 "compare_and_write": false, 00:28:21.555 "abort": true, 00:28:21.555 "seek_hole": false, 00:28:21.555 "seek_data": false, 00:28:21.555 "copy": true, 00:28:21.555 "nvme_iov_md": false 00:28:21.555 }, 00:28:21.555 "driver_specific": { 00:28:21.555 "nvme": [ 00:28:21.555 { 00:28:21.555 "pci_address": "0000:00:11.0", 00:28:21.555 "trid": { 00:28:21.555 "trtype": "PCIe", 00:28:21.555 "traddr": "0000:00:11.0" 00:28:21.555 }, 00:28:21.555 "ctrlr_data": { 00:28:21.555 "cntlid": 0, 00:28:21.555 "vendor_id": "0x1b36", 00:28:21.555 "model_number": "QEMU NVMe Ctrl", 00:28:21.555 "serial_number": "12341", 00:28:21.555 "firmware_revision": "8.0.0", 00:28:21.556 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:21.556 "oacs": { 00:28:21.556 "security": 0, 00:28:21.556 "format": 1, 00:28:21.556 "firmware": 0, 00:28:21.556 "ns_manage": 1 00:28:21.556 }, 00:28:21.556 "multi_ctrlr": false, 00:28:21.556 "ana_reporting": false 00:28:21.556 }, 00:28:21.556 "vs": { 00:28:21.556 "nvme_version": "1.4" 00:28:21.556 }, 00:28:21.556 "ns_data": { 00:28:21.556 "id": 1, 00:28:21.556 "can_share": false 00:28:21.556 } 00:28:21.556 } 00:28:21.556 ], 00:28:21.556 "mp_policy": "active_passive" 00:28:21.556 } 00:28:21.556 } 00:28:21.556 ]' 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:21.556 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:21.815 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=90f3338a-e671-44d4-8df1-96aabbf38788 00:28:21.815 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:21.815 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 90f3338a-e671-44d4-8df1-96aabbf38788 00:28:22.073 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:22.073 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=3219cdd0-f47b-41d1-83ec-79a673b87137 00:28:22.073 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3219cdd0-f47b-41d1-83ec-79a673b87137 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:22.331 04:34:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:22.590 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:22.590 { 00:28:22.590 "name": "d32a9db9-0751-42d8-b6dc-fbaf52b7ab24", 00:28:22.590 "aliases": [ 00:28:22.590 "lvs/nvme0n1p0" 00:28:22.590 ], 00:28:22.590 "product_name": "Logical Volume", 00:28:22.590 "block_size": 4096, 00:28:22.590 "num_blocks": 26476544, 00:28:22.590 "uuid": "d32a9db9-0751-42d8-b6dc-fbaf52b7ab24", 00:28:22.590 "assigned_rate_limits": { 00:28:22.590 "rw_ios_per_sec": 0, 00:28:22.590 "rw_mbytes_per_sec": 0, 00:28:22.590 "r_mbytes_per_sec": 0, 00:28:22.590 "w_mbytes_per_sec": 0 00:28:22.590 }, 00:28:22.590 "claimed": false, 00:28:22.590 "zoned": false, 00:28:22.590 "supported_io_types": { 00:28:22.590 "read": true, 00:28:22.590 "write": true, 00:28:22.590 "unmap": true, 00:28:22.590 "flush": false, 00:28:22.590 "reset": true, 00:28:22.590 "nvme_admin": false, 00:28:22.590 "nvme_io": false, 00:28:22.590 "nvme_io_md": false, 00:28:22.590 "write_zeroes": true, 00:28:22.590 "zcopy": false, 00:28:22.590 "get_zone_info": false, 00:28:22.590 "zone_management": false, 00:28:22.590 
"zone_append": false, 00:28:22.590 "compare": false, 00:28:22.590 "compare_and_write": false, 00:28:22.590 "abort": false, 00:28:22.590 "seek_hole": true, 00:28:22.590 "seek_data": true, 00:28:22.590 "copy": false, 00:28:22.590 "nvme_iov_md": false 00:28:22.590 }, 00:28:22.590 "driver_specific": { 00:28:22.590 "lvol": { 00:28:22.590 "lvol_store_uuid": "3219cdd0-f47b-41d1-83ec-79a673b87137", 00:28:22.590 "base_bdev": "nvme0n1", 00:28:22.590 "thin_provision": true, 00:28:22.590 "num_allocated_clusters": 0, 00:28:22.590 "snapshot": false, 00:28:22.590 "clone": false, 00:28:22.590 "esnap_clone": false 00:28:22.590 } 00:28:22.590 } 00:28:22.590 } 00:28:22.590 ]' 00:28:22.590 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:22.590 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:22.590 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:22.590 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:22.590 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:22.590 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:22.590 04:34:08 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:22.590 04:34:08 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:22.590 04:34:08 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:22.848 04:34:08 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:22.849 04:34:08 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:22.849 04:34:08 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:22.849 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:22.849 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:22.849 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:22.849 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:22.849 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:23.107 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:23.107 { 00:28:23.107 "name": "d32a9db9-0751-42d8-b6dc-fbaf52b7ab24", 00:28:23.107 "aliases": [ 00:28:23.107 "lvs/nvme0n1p0" 00:28:23.107 ], 00:28:23.107 "product_name": "Logical Volume", 00:28:23.107 "block_size": 4096, 00:28:23.107 "num_blocks": 26476544, 00:28:23.107 "uuid": "d32a9db9-0751-42d8-b6dc-fbaf52b7ab24", 00:28:23.107 "assigned_rate_limits": { 00:28:23.107 "rw_ios_per_sec": 0, 00:28:23.107 "rw_mbytes_per_sec": 0, 00:28:23.107 "r_mbytes_per_sec": 0, 00:28:23.107 "w_mbytes_per_sec": 0 00:28:23.107 }, 00:28:23.107 "claimed": false, 00:28:23.107 "zoned": false, 00:28:23.107 "supported_io_types": { 00:28:23.107 "read": true, 00:28:23.107 "write": true, 00:28:23.107 "unmap": true, 00:28:23.107 "flush": false, 00:28:23.107 "reset": true, 00:28:23.107 "nvme_admin": false, 00:28:23.107 "nvme_io": false, 00:28:23.107 "nvme_io_md": false, 00:28:23.107 "write_zeroes": true, 00:28:23.107 "zcopy": false, 00:28:23.107 "get_zone_info": false, 00:28:23.107 
"zone_management": false, 00:28:23.107 "zone_append": false, 00:28:23.107 "compare": false, 00:28:23.107 "compare_and_write": false, 00:28:23.107 "abort": false, 00:28:23.107 "seek_hole": true, 00:28:23.107 "seek_data": true, 00:28:23.107 "copy": false, 00:28:23.107 "nvme_iov_md": false 00:28:23.107 }, 00:28:23.107 "driver_specific": { 00:28:23.107 "lvol": { 00:28:23.107 "lvol_store_uuid": "3219cdd0-f47b-41d1-83ec-79a673b87137", 00:28:23.107 "base_bdev": "nvme0n1", 00:28:23.107 "thin_provision": true, 00:28:23.107 "num_allocated_clusters": 0, 00:28:23.107 "snapshot": false, 00:28:23.107 "clone": false, 00:28:23.108 "esnap_clone": false 00:28:23.108 } 00:28:23.108 } 00:28:23.108 } 00:28:23.108 ]' 00:28:23.108 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:23.108 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:23.108 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:23.108 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:23.108 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:23.108 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:23.108 04:34:08 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:23.108 04:34:08 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:23.366 04:34:08 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:23.366 04:34:08 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:23.366 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:23.366 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:23.366 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:23.366 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:23.366 04:34:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 00:28:23.626 04:34:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:23.626 { 00:28:23.626 "name": "d32a9db9-0751-42d8-b6dc-fbaf52b7ab24", 00:28:23.626 "aliases": [ 00:28:23.626 "lvs/nvme0n1p0" 00:28:23.626 ], 00:28:23.626 "product_name": "Logical Volume", 00:28:23.626 "block_size": 4096, 00:28:23.626 "num_blocks": 26476544, 00:28:23.626 "uuid": "d32a9db9-0751-42d8-b6dc-fbaf52b7ab24", 00:28:23.626 "assigned_rate_limits": { 00:28:23.626 "rw_ios_per_sec": 0, 00:28:23.626 "rw_mbytes_per_sec": 0, 00:28:23.626 "r_mbytes_per_sec": 0, 00:28:23.626 "w_mbytes_per_sec": 0 00:28:23.626 }, 00:28:23.626 "claimed": false, 00:28:23.626 "zoned": false, 00:28:23.626 "supported_io_types": { 00:28:23.626 "read": true, 00:28:23.626 "write": true, 00:28:23.626 "unmap": true, 00:28:23.626 "flush": false, 00:28:23.626 "reset": true, 00:28:23.626 "nvme_admin": false, 00:28:23.626 "nvme_io": false, 00:28:23.626 "nvme_io_md": false, 00:28:23.626 "write_zeroes": true, 00:28:23.626 "zcopy": false, 00:28:23.626 "get_zone_info": false, 00:28:23.626 "zone_management": false, 00:28:23.626 "zone_append": false, 00:28:23.626 "compare": false, 00:28:23.626 "compare_and_write": false, 00:28:23.626 "abort": false, 
00:28:23.626 "seek_hole": true, 00:28:23.626 "seek_data": true, 00:28:23.626 "copy": false, 00:28:23.626 "nvme_iov_md": false 00:28:23.626 }, 00:28:23.626 "driver_specific": { 00:28:23.626 "lvol": { 00:28:23.626 "lvol_store_uuid": "3219cdd0-f47b-41d1-83ec-79a673b87137", 00:28:23.626 "base_bdev": "nvme0n1", 00:28:23.626 "thin_provision": true, 00:28:23.626 "num_allocated_clusters": 0, 00:28:23.626 "snapshot": false, 00:28:23.626 "clone": false, 00:28:23.626 "esnap_clone": false 00:28:23.626 } 00:28:23.626 } 00:28:23.626 } 00:28:23.626 ]' 00:28:23.626 04:34:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 --l2p_dram_limit 10' 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:23.627 04:34:09 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:23.887 [2024-11-17 04:34:09.398041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.398085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:23.887 [2024-11-17 04:34:09.398096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:23.887 [2024-11-17 04:34:09.398104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.398146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.398155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:23.887 [2024-11-17 04:34:09.398163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:23.887 [2024-11-17 04:34:09.398173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.398192] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:23.887 [2024-11-17 04:34:09.398444] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:23.887 [2024-11-17 04:34:09.398457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.398467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:23.887 [2024-11-17 04:34:09.398474] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:28:23.887 [2024-11-17 04:34:09.398481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.398531] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d54227a3-f4e5-474b-a0e2-73feee6cf014 00:28:23.887 [2024-11-17 04:34:09.399473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.399587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:23.887 [2024-11-17 04:34:09.399605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:23.887 [2024-11-17 04:34:09.399611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.404410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.404435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:23.887 [2024-11-17 04:34:09.404444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.753 ms 00:28:23.887 [2024-11-17 04:34:09.404451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.404511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.404520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:23.887 [2024-11-17 04:34:09.404535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:23.887 [2024-11-17 04:34:09.404541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.404594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.404602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:23.887 [2024-11-17 04:34:09.404610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:23.887 [2024-11-17 04:34:09.404618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.404637] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:23.887 [2024-11-17 04:34:09.405913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.405940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:23.887 [2024-11-17 04:34:09.405947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:28:23.887 [2024-11-17 04:34:09.405955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.405979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.405987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:23.887 [2024-11-17 04:34:09.405994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:23.887 [2024-11-17 04:34:09.406002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.406020] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:23.887 [2024-11-17 04:34:09.406136] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:23.887 [2024-11-17 04:34:09.406145] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:23.887 [2024-11-17 04:34:09.406159] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:23.887 [2024-11-17 04:34:09.406167] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:23.887 [2024-11-17 04:34:09.406182] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:23.887 [2024-11-17 04:34:09.406188] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:23.887 [2024-11-17 04:34:09.406197] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:23.887 [2024-11-17 04:34:09.406202] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:23.887 [2024-11-17 04:34:09.406210] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:23.887 [2024-11-17 04:34:09.406216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.406223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:23.887 [2024-11-17 04:34:09.406229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:28:23.887 [2024-11-17 04:34:09.406236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.406300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.887 [2024-11-17 04:34:09.406309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:23.887 [2024-11-17 04:34:09.406315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:23.887 [2024-11-17 04:34:09.406322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.887 [2024-11-17 04:34:09.406411] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:23.887 [2024-11-17 04:34:09.406421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:23.887 [2024-11-17 04:34:09.406427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:23.888 [2024-11-17 04:34:09.406434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:23.888 [2024-11-17 04:34:09.406447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:23.888 [2024-11-17 04:34:09.406459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:23.888 [2024-11-17 04:34:09.406464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:23.888 [2024-11-17 04:34:09.406478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:23.888 [2024-11-17 04:34:09.406485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:23.888 [2024-11-17 04:34:09.406491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:23.888 [2024-11-17 04:34:09.406498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:23.888 [2024-11-17 04:34:09.406504] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:23.888 [2024-11-17 04:34:09.406510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:23.888 [2024-11-17 04:34:09.406522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:23.888 [2024-11-17 04:34:09.406527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:23.888 [2024-11-17 04:34:09.406538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:23.888 [2024-11-17 04:34:09.406550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:23.888 [2024-11-17 04:34:09.406556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:23.888 [2024-11-17 04:34:09.406569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:23.888 [2024-11-17 04:34:09.406575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:23.888 [2024-11-17 04:34:09.406588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:23.888 [2024-11-17 04:34:09.406597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:23.888 [2024-11-17 04:34:09.406609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:23.888 [2024-11-17 04:34:09.406615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:23.888 [2024-11-17 04:34:09.406628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:23.888 [2024-11-17 04:34:09.406636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:23.888 [2024-11-17 04:34:09.406644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:23.888 [2024-11-17 04:34:09.406651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:23.888 [2024-11-17 04:34:09.406657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:23.888 [2024-11-17 04:34:09.406664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:23.888 [2024-11-17 04:34:09.406677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:23.888 [2024-11-17 04:34:09.406683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406690] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:23.888 [2024-11-17 04:34:09.406697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:23.888 [2024-11-17 04:34:09.406706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:23.888 [2024-11-17 04:34:09.406712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.888 [2024-11-17 04:34:09.406723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:23.888 [2024-11-17 04:34:09.406729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:23.888 [2024-11-17 04:34:09.406736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:23.888 [2024-11-17 04:34:09.406742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:23.888 [2024-11-17 04:34:09.406750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:23.888 [2024-11-17 04:34:09.406756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:23.888 [2024-11-17 04:34:09.406766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:23.888 [2024-11-17 04:34:09.406775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:23.888 [2024-11-17 04:34:09.406784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:23.888 [2024-11-17 04:34:09.406791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:23.888 [2024-11-17 04:34:09.406799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:23.888 [2024-11-17 04:34:09.406806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:23.888 [2024-11-17 04:34:09.406814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:23.888 [2024-11-17 04:34:09.406820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:23.888 [2024-11-17 04:34:09.406829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:23.888 [2024-11-17 04:34:09.406836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:23.888 [2024-11-17 04:34:09.406843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:23.888 [2024-11-17 04:34:09.406849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:23.888 [2024-11-17 04:34:09.406857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:23.888 [2024-11-17 04:34:09.406863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:23.888 [2024-11-17 04:34:09.406871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:23.888 [2024-11-17 04:34:09.406878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
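The superblock and region layout dump above is printed while `bdev_ftl_create` brings up ftl0 on this run's thin lvol and NV-cache split. Condensing the RPC calls already traced earlier in this section (addresses, sizes and UUIDs are the ones from this particular run, and this is a restatement of the trace rather than the restore.sh/common.sh helpers themselves):

```bash
# FTL device bring-up as issued in this run, via SPDK's rpc.py.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe device
$RPC bdev_lvol_create_lvstore nvme0n1 lvs                            # lvstore on the base bdev
$RPC bdev_lvol_create nvme0n1p0 103424 -t \
     -u 3219cdd0-f47b-41d1-83ec-79a673b87137                         # thin-provisioned 103424 MiB lvol
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV-cache NVMe device
$RPC bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB cache partition
$RPC -t 240 bdev_ftl_create -b ftl0 \
     -d d32a9db9-0751-42d8-b6dc-fbaf52b7ab24 \
     --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown                 # FTL bdev, fast shutdown enabled
```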
00:28:23.888 [2024-11-17 04:34:09.406885] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:23.888 [2024-11-17 04:34:09.406892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:23.888 [2024-11-17 04:34:09.406901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:23.888 [2024-11-17 04:34:09.406908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:23.888 [2024-11-17 04:34:09.406916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:23.888 [2024-11-17 04:34:09.406922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:23.888 [2024-11-17 04:34:09.406930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.888 [2024-11-17 04:34:09.406939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:23.888 [2024-11-17 04:34:09.406948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:28:23.888 [2024-11-17 04:34:09.406953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.888 [2024-11-17 04:34:09.406984] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:23.888 [2024-11-17 04:34:09.406991] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:27.182 [2024-11-17 04:34:12.785882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.182 [2024-11-17 04:34:12.786295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:27.182 [2024-11-17 04:34:12.786412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3378.875 ms 00:28:27.182 [2024-11-17 04:34:12.786444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.182 [2024-11-17 04:34:12.806406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.182 [2024-11-17 04:34:12.806636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:27.182 [2024-11-17 04:34:12.807000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.793 ms 00:28:27.182 [2024-11-17 04:34:12.807058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.182 [2024-11-17 04:34:12.807228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.182 [2024-11-17 04:34:12.807266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:27.182 [2024-11-17 04:34:12.807428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:28:27.182 [2024-11-17 04:34:12.807450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.182 [2024-11-17 04:34:12.825179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.182 [2024-11-17 04:34:12.825389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:27.182 [2024-11-17 04:34:12.825615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.566 ms 00:28:27.182 [2024-11-17 04:34:12.825630] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.182 [2024-11-17 04:34:12.825681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.182 [2024-11-17 04:34:12.825692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:27.182 [2024-11-17 04:34:12.825712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:27.182 [2024-11-17 04:34:12.825722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.182 [2024-11-17 04:34:12.826496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.182 [2024-11-17 04:34:12.826536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:27.182 [2024-11-17 04:34:12.826551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:28:27.182 [2024-11-17 04:34:12.826562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.182 [2024-11-17 04:34:12.826692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.182 [2024-11-17 04:34:12.826717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:27.182 [2024-11-17 04:34:12.826730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:28:27.182 [2024-11-17 04:34:12.826740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.182 [2024-11-17 04:34:12.839281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.182 [2024-11-17 04:34:12.839333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:27.182 [2024-11-17 04:34:12.839348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.515 ms 00:28:27.182 [2024-11-17 04:34:12.839358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.182 [2024-11-17 04:34:12.851144] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:27.182 [2024-11-17 04:34:12.856245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.182 [2024-11-17 04:34:12.856299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:27.182 [2024-11-17 04:34:12.856312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.765 ms 00:28:27.182 [2024-11-17 04:34:12.856325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:12.945703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:12.945803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:27.443 [2024-11-17 04:34:12.945834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.340 ms 00:28:27.443 [2024-11-17 04:34:12.945871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:12.946233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:12.946270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:27.443 [2024-11-17 04:34:12.946289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:28:27.443 [2024-11-17 04:34:12.946310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:12.953542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:12.953603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:27.443 [2024-11-17 04:34:12.953617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.128 ms 00:28:27.443 [2024-11-17 04:34:12.953639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:12.958986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:12.959043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:27.443 [2024-11-17 04:34:12.959056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.293 ms 00:28:27.443 [2024-11-17 04:34:12.959067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:12.959469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:12.959487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:27.443 [2024-11-17 04:34:12.959497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:28:27.443 [2024-11-17 04:34:12.959510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:13.007113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:13.007325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:27.443 [2024-11-17 04:34:13.007347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.578 ms 00:28:27.443 [2024-11-17 04:34:13.007368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:13.015120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:13.015175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:27.443 [2024-11-17 04:34:13.015187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.639 ms 00:28:27.443 [2024-11-17 04:34:13.015199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:13.020743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:13.020794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:27.443 [2024-11-17 04:34:13.020806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.491 ms 00:28:27.443 [2024-11-17 04:34:13.020818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:13.026647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:13.026697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:27.443 [2024-11-17 04:34:13.026710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.782 ms 00:28:27.443 [2024-11-17 04:34:13.026724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:13.026775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:13.026790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:27.443 [2024-11-17 04:34:13.026800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:27.443 [2024-11-17 04:34:13.026821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.443 [2024-11-17 04:34:13.026905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.443 [2024-11-17 04:34:13.026921] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:27.444 [2024-11-17 04:34:13.026931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:27.444 [2024-11-17 04:34:13.026943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.444 [2024-11-17 04:34:13.028294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3629.696 ms, result 0 00:28:27.444 { 00:28:27.444 "name": "ftl0", 00:28:27.444 "uuid": "d54227a3-f4e5-474b-a0e2-73feee6cf014" 00:28:27.444 } 00:28:27.444 04:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:27.444 04:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:27.705 04:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:27.705 04:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:27.967 [2024-11-17 04:34:13.472052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.472260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:27.967 [2024-11-17 04:34:13.472335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:27.967 [2024-11-17 04:34:13.472365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.472447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:27.967 [2024-11-17 04:34:13.473487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.473639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:27.967 [2024-11-17 04:34:13.473698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:28:27.967 [2024-11-17 04:34:13.473768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.474079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.474164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:27.967 [2024-11-17 04:34:13.474192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:28:27.967 [2024-11-17 04:34:13.474257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.477551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.477649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:27.967 [2024-11-17 04:34:13.477700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:28:27.967 [2024-11-17 04:34:13.477727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.483987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.484138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:27.967 [2024-11-17 04:34:13.484198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.220 ms 00:28:27.967 [2024-11-17 04:34:13.484224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.487309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:27.967 [2024-11-17 04:34:13.487499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:27.967 [2024-11-17 04:34:13.487558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.983 ms 00:28:27.967 [2024-11-17 04:34:13.487586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.494899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.495073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:27.967 [2024-11-17 04:34:13.495135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.260 ms 00:28:27.967 [2024-11-17 04:34:13.495150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.495553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.495614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:27.967 [2024-11-17 04:34:13.495632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:28:27.967 [2024-11-17 04:34:13.495645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.499075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.499135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:27.967 [2024-11-17 04:34:13.499146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.410 ms 00:28:27.967 [2024-11-17 04:34:13.499157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.501781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.501844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:27.967 [2024-11-17 04:34:13.501856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:28:27.967 [2024-11-17 04:34:13.501867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.504042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.504098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:27.967 [2024-11-17 04:34:13.504109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.128 ms 00:28:27.967 [2024-11-17 04:34:13.504120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.506437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.967 [2024-11-17 04:34:13.506491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:27.967 [2024-11-17 04:34:13.506502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:28:27.967 [2024-11-17 04:34:13.506513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.967 [2024-11-17 04:34:13.506558] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:27.967 [2024-11-17 04:34:13.506582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506604] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:27.967 [2024-11-17 04:34:13.506818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506847] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.506991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 
04:34:13.507082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:27.968 [2024-11-17 04:34:13.507332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:27.968 [2024-11-17 04:34:13.507620] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:27.968 [2024-11-17 04:34:13.507630] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d54227a3-f4e5-474b-a0e2-73feee6cf014 00:28:27.968 
[2024-11-17 04:34:13.507641] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:27.968 [2024-11-17 04:34:13.507648] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:27.968 [2024-11-17 04:34:13.507659] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:27.968 [2024-11-17 04:34:13.507668] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:27.968 [2024-11-17 04:34:13.507680] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:27.968 [2024-11-17 04:34:13.507691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:27.968 [2024-11-17 04:34:13.507701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:27.968 [2024-11-17 04:34:13.507707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:27.968 [2024-11-17 04:34:13.507716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:27.968 [2024-11-17 04:34:13.507725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.968 [2024-11-17 04:34:13.507736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:27.968 [2024-11-17 04:34:13.507745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:28:27.968 [2024-11-17 04:34:13.507756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.968 [2024-11-17 04:34:13.510872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.968 [2024-11-17 04:34:13.510914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:27.968 [2024-11-17 04:34:13.510927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.094 ms 00:28:27.969 [2024-11-17 04:34:13.510941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.511118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.969 [2024-11-17 04:34:13.511133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:27.969 [2024-11-17 04:34:13.511143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:28:27.969 [2024-11-17 04:34:13.511154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.521933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.521987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:27.969 [2024-11-17 04:34:13.521999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.522014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.522078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.522091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:27.969 [2024-11-17 04:34:13.522107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.522118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.522206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.522226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:27.969 [2024-11-17 04:34:13.522235] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.522247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.522268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.522280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:27.969 [2024-11-17 04:34:13.522288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.522300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.541436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.541718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:27.969 [2024-11-17 04:34:13.541739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.541754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.556999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.557239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:27.969 [2024-11-17 04:34:13.557259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.557271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.557369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.557413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:27.969 [2024-11-17 04:34:13.557422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.557435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.557487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.557503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:27.969 [2024-11-17 04:34:13.557512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.557523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.557620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.557636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:27.969 [2024-11-17 04:34:13.557646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.557657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.557694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.557710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:27.969 [2024-11-17 04:34:13.557720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.557731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.557783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.557800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:27.969 [2024-11-17 04:34:13.557811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.557822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.557880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.969 [2024-11-17 04:34:13.557905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:27.969 [2024-11-17 04:34:13.557914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.969 [2024-11-17 04:34:13.557925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.969 [2024-11-17 04:34:13.558097] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.992 ms, result 0 00:28:27.969 true 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92625 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92625 ']' 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92625 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92625 00:28:27.969 killing process with pid 92625 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92625' 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 92625 00:28:27.969 04:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 92625 00:28:32.177 04:34:17 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:36.503 262144+0 records in 00:28:36.503 262144+0 records out 00:28:36.503 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.1214 s, 261 MB/s 00:28:36.503 04:34:21 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:38.418 04:34:23 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:38.418 [2024-11-17 04:34:23.760835] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:28:38.418 [2024-11-17 04:34:23.761674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92842 ] 00:28:38.418 [2024-11-17 04:34:23.927630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.418 [2024-11-17 04:34:23.946978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.418 [2024-11-17 04:34:24.043314] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:38.418 [2024-11-17 04:34:24.043406] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:38.681 [2024-11-17 04:34:24.205684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.681 [2024-11-17 04:34:24.205904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:38.682 [2024-11-17 04:34:24.205930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:38.682 [2024-11-17 04:34:24.205941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.206010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.682 [2024-11-17 04:34:24.206027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:38.682 [2024-11-17 04:34:24.206035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:38.682 [2024-11-17 04:34:24.206043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.206068] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:38.682 [2024-11-17 04:34:24.206331] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:38.682 [2024-11-17 04:34:24.206351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.682 [2024-11-17 04:34:24.206360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:38.682 [2024-11-17 04:34:24.206370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:28:38.682 [2024-11-17 04:34:24.206411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.208031] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:38.682 [2024-11-17 04:34:24.211627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.682 [2024-11-17 04:34:24.211680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:38.682 [2024-11-17 04:34:24.211693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:28:38.682 [2024-11-17 04:34:24.211710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.211786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.682 [2024-11-17 04:34:24.211799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:38.682 [2024-11-17 04:34:24.211809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:38.682 [2024-11-17 04:34:24.211816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.219879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:38.682 [2024-11-17 04:34:24.219926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:38.682 [2024-11-17 04:34:24.219940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.021 ms 00:28:38.682 [2024-11-17 04:34:24.219953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.220051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.682 [2024-11-17 04:34:24.220066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:38.682 [2024-11-17 04:34:24.220075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:28:38.682 [2024-11-17 04:34:24.220086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.220147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.682 [2024-11-17 04:34:24.220159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:38.682 [2024-11-17 04:34:24.220168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:38.682 [2024-11-17 04:34:24.220181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.220206] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:38.682 [2024-11-17 04:34:24.222315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.682 [2024-11-17 04:34:24.222356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:38.682 [2024-11-17 04:34:24.222367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.114 ms 00:28:38.682 [2024-11-17 04:34:24.222399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.222433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.682 [2024-11-17 04:34:24.222443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:38.682 [2024-11-17 04:34:24.222453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:38.682 [2024-11-17 04:34:24.222461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.222492] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:38.682 [2024-11-17 04:34:24.222514] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:38.682 [2024-11-17 04:34:24.222552] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:38.682 [2024-11-17 04:34:24.222574] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:38.682 [2024-11-17 04:34:24.222681] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:38.682 [2024-11-17 04:34:24.222694] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:38.682 [2024-11-17 04:34:24.222705] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:38.682 [2024-11-17 04:34:24.222723] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:38.682 [2024-11-17 04:34:24.222733] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:38.682 [2024-11-17 04:34:24.222742] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:38.682 [2024-11-17 04:34:24.222750] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:38.682 [2024-11-17 04:34:24.222759] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:38.682 [2024-11-17 04:34:24.222768] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:38.682 [2024-11-17 04:34:24.222777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.682 [2024-11-17 04:34:24.222785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:38.682 [2024-11-17 04:34:24.222798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:28:38.682 [2024-11-17 04:34:24.222809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.222891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.682 [2024-11-17 04:34:24.222903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:38.682 [2024-11-17 04:34:24.222911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:38.682 [2024-11-17 04:34:24.222918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.682 [2024-11-17 04:34:24.223017] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:38.682 [2024-11-17 04:34:24.223029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:38.682 [2024-11-17 04:34:24.223038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:38.682 [2024-11-17 04:34:24.223047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.682 [2024-11-17 04:34:24.223056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:38.682 [2024-11-17 04:34:24.223072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:38.682 [2024-11-17 04:34:24.223080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:38.682 [2024-11-17 04:34:24.223090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:38.682 [2024-11-17 04:34:24.223098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:38.682 [2024-11-17 04:34:24.223109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:38.682 [2024-11-17 04:34:24.223120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:38.682 [2024-11-17 04:34:24.223128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:38.682 [2024-11-17 04:34:24.223136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:38.682 [2024-11-17 04:34:24.223144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:38.682 [2024-11-17 04:34:24.223154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:38.682 [2024-11-17 04:34:24.223164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.682 [2024-11-17 04:34:24.223174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:38.682 [2024-11-17 04:34:24.223183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:38.682 [2024-11-17 04:34:24.223191] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.682 [2024-11-17 04:34:24.223199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:38.682 [2024-11-17 04:34:24.223207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:38.682 [2024-11-17 04:34:24.223216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.682 [2024-11-17 04:34:24.223225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:38.682 [2024-11-17 04:34:24.223232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:38.682 [2024-11-17 04:34:24.223240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.682 [2024-11-17 04:34:24.223247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:38.682 [2024-11-17 04:34:24.223260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:38.682 [2024-11-17 04:34:24.223269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.682 [2024-11-17 04:34:24.223278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:38.682 [2024-11-17 04:34:24.223285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:38.682 [2024-11-17 04:34:24.223293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.682 [2024-11-17 04:34:24.223301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:38.682 [2024-11-17 04:34:24.223309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:38.682 [2024-11-17 04:34:24.223318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:38.682 [2024-11-17 04:34:24.223325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:38.682 [2024-11-17 04:34:24.223333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:38.682 [2024-11-17 04:34:24.223339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:38.682 [2024-11-17 04:34:24.223346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:38.682 [2024-11-17 04:34:24.223352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:38.682 [2024-11-17 04:34:24.223359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.683 [2024-11-17 04:34:24.223366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:38.683 [2024-11-17 04:34:24.223389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:38.683 [2024-11-17 04:34:24.223399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.683 [2024-11-17 04:34:24.223406] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:38.683 [2024-11-17 04:34:24.223414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:38.683 [2024-11-17 04:34:24.223428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:38.683 [2024-11-17 04:34:24.223436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.683 [2024-11-17 04:34:24.223443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:38.683 [2024-11-17 04:34:24.223451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:38.683 [2024-11-17 04:34:24.223458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:38.683 
[2024-11-17 04:34:24.223465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:38.683 [2024-11-17 04:34:24.223472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:38.683 [2024-11-17 04:34:24.223480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:38.683 [2024-11-17 04:34:24.223489] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:38.683 [2024-11-17 04:34:24.223498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:38.683 [2024-11-17 04:34:24.223507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:38.683 [2024-11-17 04:34:24.223516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:38.683 [2024-11-17 04:34:24.223524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:38.683 [2024-11-17 04:34:24.223534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:38.683 [2024-11-17 04:34:24.223541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:38.683 [2024-11-17 04:34:24.223549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:38.683 [2024-11-17 04:34:24.223557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:38.683 [2024-11-17 04:34:24.223565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:38.683 [2024-11-17 04:34:24.223572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:38.683 [2024-11-17 04:34:24.223580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:38.683 [2024-11-17 04:34:24.223587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:38.683 [2024-11-17 04:34:24.223594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:38.683 [2024-11-17 04:34:24.223601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:38.683 [2024-11-17 04:34:24.223609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:38.683 [2024-11-17 04:34:24.223616] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:38.683 [2024-11-17 04:34:24.223624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:38.683 [2024-11-17 04:34:24.223633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:38.683 [2024-11-17 04:34:24.223643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:38.683 [2024-11-17 04:34:24.223652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:38.683 [2024-11-17 04:34:24.223661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:38.683 [2024-11-17 04:34:24.223669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.223676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:38.683 [2024-11-17 04:34:24.223684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:28:38.683 [2024-11-17 04:34:24.223691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.237619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.237663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:38.683 [2024-11-17 04:34:24.237682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.876 ms 00:28:38.683 [2024-11-17 04:34:24.237691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.237778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.237787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:38.683 [2024-11-17 04:34:24.237800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:38.683 [2024-11-17 04:34:24.237808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.258115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.258182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:38.683 [2024-11-17 04:34:24.258200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.248 ms 00:28:38.683 [2024-11-17 04:34:24.258218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.258277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.258297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:38.683 [2024-11-17 04:34:24.258310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:38.683 [2024-11-17 04:34:24.258332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.259013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.259065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:38.683 [2024-11-17 04:34:24.259081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:28:38.683 [2024-11-17 04:34:24.259100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.259296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.259311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:38.683 [2024-11-17 04:34:24.259323] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:28:38.683 [2024-11-17 04:34:24.259333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.268437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.268710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:38.683 [2024-11-17 04:34:24.268746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.077 ms 00:28:38.683 [2024-11-17 04:34:24.268758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.272643] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:38.683 [2024-11-17 04:34:24.272694] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:38.683 [2024-11-17 04:34:24.272707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.272716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:38.683 [2024-11-17 04:34:24.272725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.835 ms 00:28:38.683 [2024-11-17 04:34:24.272733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.289081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.289134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:38.683 [2024-11-17 04:34:24.289149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.289 ms 00:28:38.683 [2024-11-17 04:34:24.289158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.292172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.292340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:38.683 [2024-11-17 04:34:24.292358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.956 ms 00:28:38.683 [2024-11-17 04:34:24.292367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.295147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.295195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:38.683 [2024-11-17 04:34:24.295206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.722 ms 00:28:38.683 [2024-11-17 04:34:24.295213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.295710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.295762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:38.683 [2024-11-17 04:34:24.295788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:28:38.683 [2024-11-17 04:34:24.295808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.319277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.319497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:38.683 [2024-11-17 04:34:24.319517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.371 ms 00:28:38.683 [2024-11-17 04:34:24.319527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.327582] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:38.683 [2024-11-17 04:34:24.330449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.330607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:38.683 [2024-11-17 04:34:24.330632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.770 ms 00:28:38.683 [2024-11-17 04:34:24.330644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.683 [2024-11-17 04:34:24.330722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.683 [2024-11-17 04:34:24.330734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:38.683 [2024-11-17 04:34:24.330743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:38.683 [2024-11-17 04:34:24.330753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.684 [2024-11-17 04:34:24.330821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.684 [2024-11-17 04:34:24.330832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:38.684 [2024-11-17 04:34:24.330841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:38.684 [2024-11-17 04:34:24.330857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.684 [2024-11-17 04:34:24.330878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.684 [2024-11-17 04:34:24.330888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:38.684 [2024-11-17 04:34:24.330898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:38.684 [2024-11-17 04:34:24.330906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.684 [2024-11-17 04:34:24.330945] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:38.684 [2024-11-17 04:34:24.330956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.684 [2024-11-17 04:34:24.330964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:38.684 [2024-11-17 04:34:24.330974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:38.684 [2024-11-17 04:34:24.330982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.684 [2024-11-17 04:34:24.335971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.684 [2024-11-17 04:34:24.336023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:38.684 [2024-11-17 04:34:24.336035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.968 ms 00:28:38.684 [2024-11-17 04:34:24.336047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.684 [2024-11-17 04:34:24.336126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.684 [2024-11-17 04:34:24.336136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:38.684 [2024-11-17 04:34:24.336152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:38.684 [2024-11-17 04:34:24.336160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.684 
[2024-11-17 04:34:24.337268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.131 ms, result 0 00:28:39.629  [2024-11-17T04:34:26.741Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-17T04:34:27.678Z] Copying: 24/1024 [MB] (14 MBps) [2024-11-17T04:34:28.611Z] Copying: 42/1024 [MB] (17 MBps) [2024-11-17T04:34:29.553Z] Copying: 82/1024 [MB] (39 MBps) [2024-11-17T04:34:30.509Z] Copying: 109/1024 [MB] (27 MBps) [2024-11-17T04:34:31.449Z] Copying: 127/1024 [MB] (17 MBps) [2024-11-17T04:34:32.392Z] Copying: 143/1024 [MB] (16 MBps) [2024-11-17T04:34:33.780Z] Copying: 153/1024 [MB] (10 MBps) [2024-11-17T04:34:34.714Z] Copying: 166/1024 [MB] (12 MBps) [2024-11-17T04:34:35.656Z] Copying: 189/1024 [MB] (22 MBps) [2024-11-17T04:34:36.599Z] Copying: 205/1024 [MB] (15 MBps) [2024-11-17T04:34:37.542Z] Copying: 215/1024 [MB] (10 MBps) [2024-11-17T04:34:38.485Z] Copying: 233/1024 [MB] (17 MBps) [2024-11-17T04:34:39.423Z] Copying: 245/1024 [MB] (11 MBps) [2024-11-17T04:34:40.358Z] Copying: 258/1024 [MB] (13 MBps) [2024-11-17T04:34:41.738Z] Copying: 286/1024 [MB] (27 MBps) [2024-11-17T04:34:42.677Z] Copying: 315/1024 [MB] (29 MBps) [2024-11-17T04:34:43.615Z] Copying: 334/1024 [MB] (18 MBps) [2024-11-17T04:34:44.551Z] Copying: 346/1024 [MB] (12 MBps) [2024-11-17T04:34:45.487Z] Copying: 380/1024 [MB] (33 MBps) [2024-11-17T04:34:46.421Z] Copying: 396/1024 [MB] (16 MBps) [2024-11-17T04:34:47.359Z] Copying: 425/1024 [MB] (28 MBps) [2024-11-17T04:34:48.739Z] Copying: 452/1024 [MB] (27 MBps) [2024-11-17T04:34:49.684Z] Copying: 464/1024 [MB] (11 MBps) [2024-11-17T04:34:50.617Z] Copying: 481/1024 [MB] (17 MBps) [2024-11-17T04:34:51.560Z] Copying: 503/1024 [MB] (22 MBps) [2024-11-17T04:34:52.501Z] Copying: 525/1024 [MB] (22 MBps) [2024-11-17T04:34:53.497Z] Copying: 541/1024 [MB] (15 MBps) [2024-11-17T04:34:54.432Z] Copying: 574/1024 [MB] (32 MBps) [2024-11-17T04:34:55.376Z] Copying: 609/1024 [MB] (35 MBps) [2024-11-17T04:34:56.755Z] Copying: 620/1024 [MB] (10 MBps) [2024-11-17T04:34:57.688Z] Copying: 636/1024 [MB] (16 MBps) [2024-11-17T04:34:58.621Z] Copying: 661/1024 [MB] (25 MBps) [2024-11-17T04:34:59.560Z] Copying: 693/1024 [MB] (31 MBps) [2024-11-17T04:35:00.499Z] Copying: 714/1024 [MB] (21 MBps) [2024-11-17T04:35:01.433Z] Copying: 728/1024 [MB] (13 MBps) [2024-11-17T04:35:02.368Z] Copying: 750/1024 [MB] (22 MBps) [2024-11-17T04:35:03.750Z] Copying: 775/1024 [MB] (24 MBps) [2024-11-17T04:35:04.695Z] Copying: 795/1024 [MB] (19 MBps) [2024-11-17T04:35:05.640Z] Copying: 810/1024 [MB] (15 MBps) [2024-11-17T04:35:06.584Z] Copying: 825/1024 [MB] (15 MBps) [2024-11-17T04:35:07.528Z] Copying: 843/1024 [MB] (18 MBps) [2024-11-17T04:35:08.473Z] Copying: 856/1024 [MB] (12 MBps) [2024-11-17T04:35:09.416Z] Copying: 872/1024 [MB] (15 MBps) [2024-11-17T04:35:10.359Z] Copying: 884/1024 [MB] (12 MBps) [2024-11-17T04:35:11.748Z] Copying: 901/1024 [MB] (16 MBps) [2024-11-17T04:35:12.691Z] Copying: 916/1024 [MB] (14 MBps) [2024-11-17T04:35:13.633Z] Copying: 932/1024 [MB] (16 MBps) [2024-11-17T04:35:14.578Z] Copying: 957/1024 [MB] (24 MBps) [2024-11-17T04:35:15.522Z] Copying: 974/1024 [MB] (17 MBps) [2024-11-17T04:35:16.465Z] Copying: 988/1024 [MB] (14 MBps) [2024-11-17T04:35:17.409Z] Copying: 1008/1024 [MB] (19 MBps) [2024-11-17T04:35:17.672Z] Copying: 1021/1024 [MB] (12 MBps) [2024-11-17T04:35:17.672Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-17 04:35:17.494728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.945 [2024-11-17 
04:35:17.494773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:31.945 [2024-11-17 04:35:17.494785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:31.945 [2024-11-17 04:35:17.494792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.945 [2024-11-17 04:35:17.494812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:31.945 [2024-11-17 04:35:17.495336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.945 [2024-11-17 04:35:17.495351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:31.945 [2024-11-17 04:35:17.495360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:29:31.945 [2024-11-17 04:35:17.495371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.945 [2024-11-17 04:35:17.497735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.945 [2024-11-17 04:35:17.497762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:31.945 [2024-11-17 04:35:17.497770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:29:31.945 [2024-11-17 04:35:17.497777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.945 [2024-11-17 04:35:17.497800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.945 [2024-11-17 04:35:17.497807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:31.945 [2024-11-17 04:35:17.497814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:31.945 [2024-11-17 04:35:17.497820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.945 [2024-11-17 04:35:17.497863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.945 [2024-11-17 04:35:17.497870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:31.945 [2024-11-17 04:35:17.497880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:31.945 [2024-11-17 04:35:17.497886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.945 [2024-11-17 04:35:17.497896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:31.945 [2024-11-17 04:35:17.497907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 
[2024-11-17 04:35:17.497961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.497997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:31.945 [2024-11-17 04:35:17.498003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 00:29:31.946 [2024-11-17 04:35:17.498115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:31.946 [2024-11-17 04:35:17.498523] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:31.946 [2024-11-17 04:35:17.498533] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d54227a3-f4e5-474b-a0e2-73feee6cf014 00:29:31.946 [2024-11-17 04:35:17.498539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:31.946 [2024-11-17 04:35:17.498545] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:31.947 [2024-11-17 04:35:17.498551] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:31.947 [2024-11-17 04:35:17.498557] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:31.947 [2024-11-17 04:35:17.498562] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:31.947 [2024-11-17 04:35:17.498568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:31.947 [2024-11-17 04:35:17.498573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:31.947 [2024-11-17 
04:35:17.498578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:31.947 [2024-11-17 04:35:17.498583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:31.947 [2024-11-17 04:35:17.498588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.947 [2024-11-17 04:35:17.498594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:31.947 [2024-11-17 04:35:17.498601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:29:31.947 [2024-11-17 04:35:17.498610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.500268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.947 [2024-11-17 04:35:17.500288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:31.947 [2024-11-17 04:35:17.500297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:29:31.947 [2024-11-17 04:35:17.500302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.500404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.947 [2024-11-17 04:35:17.500412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:31.947 [2024-11-17 04:35:17.500425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:29:31.947 [2024-11-17 04:35:17.500431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.506118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.506212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:31.947 [2024-11-17 04:35:17.506299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.506319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.506386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.506442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:31.947 [2024-11-17 04:35:17.506585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.506604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.506666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.506687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:31.947 [2024-11-17 04:35:17.506702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.506717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.506738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.506807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:31.947 [2024-11-17 04:35:17.506891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.506912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.517671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.517793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:29:31.947 [2024-11-17 04:35:17.517834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.517852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.526166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.526286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:31.947 [2024-11-17 04:35:17.526328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.526352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.526447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.526470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:31.947 [2024-11-17 04:35:17.526510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.526545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.526580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.526614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:31.947 [2024-11-17 04:35:17.526633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.526648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.526706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.526786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:31.947 [2024-11-17 04:35:17.526805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.526820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.526852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.526898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:31.947 [2024-11-17 04:35:17.526916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.526932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.526986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.527005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:31.947 [2024-11-17 04:35:17.527060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.527075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.527121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.947 [2024-11-17 04:35:17.527141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:31.947 [2024-11-17 04:35:17.527178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.947 [2024-11-17 04:35:17.527218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.947 [2024-11-17 04:35:17.527346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL fast shutdown', duration = 32.583 ms, result 0 00:29:32.208 00:29:32.208 00:29:32.208 04:35:17 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:32.208 [2024-11-17 04:35:17.932230] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:32.208 [2024-11-17 04:35:17.932484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93388 ] 00:29:32.469 [2024-11-17 04:35:18.086688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.469 [2024-11-17 04:35:18.111675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.731 [2024-11-17 04:35:18.210520] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:32.731 [2024-11-17 04:35:18.210695] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:32.731 [2024-11-17 04:35:18.365721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.731 [2024-11-17 04:35:18.365836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:32.731 [2024-11-17 04:35:18.365887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:32.731 [2024-11-17 04:35:18.365907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.731 [2024-11-17 04:35:18.365966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.731 [2024-11-17 04:35:18.365986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:32.731 [2024-11-17 04:35:18.366002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:32.731 [2024-11-17 04:35:18.366017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.731 [2024-11-17 04:35:18.366047] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:32.731 [2024-11-17 04:35:18.366260] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:32.731 [2024-11-17 04:35:18.366298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.731 [2024-11-17 04:35:18.366317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:32.731 [2024-11-17 04:35:18.366335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:29:32.731 [2024-11-17 04:35:18.366352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.731 [2024-11-17 04:35:18.366627] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:32.731 [2024-11-17 04:35:18.366666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.731 [2024-11-17 04:35:18.366684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:32.731 [2024-11-17 04:35:18.366701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:32.731 [2024-11-17 04:35:18.366719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.731 [2024-11-17 04:35:18.366815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:32.731 [2024-11-17 04:35:18.366839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:32.731 [2024-11-17 04:35:18.366854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:32.731 [2024-11-17 04:35:18.366871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.731 [2024-11-17 04:35:18.367076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.731 [2024-11-17 04:35:18.367099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:32.731 [2024-11-17 04:35:18.367118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:29:32.731 [2024-11-17 04:35:18.367132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.731 [2024-11-17 04:35:18.367243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.731 [2024-11-17 04:35:18.367264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:32.731 [2024-11-17 04:35:18.367281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:32.731 [2024-11-17 04:35:18.367295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.731 [2024-11-17 04:35:18.367321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.732 [2024-11-17 04:35:18.367369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:32.732 [2024-11-17 04:35:18.367442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:32.732 [2024-11-17 04:35:18.367460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.732 [2024-11-17 04:35:18.367508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:32.732 [2024-11-17 04:35:18.369158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.732 [2024-11-17 04:35:18.369247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:32.732 [2024-11-17 04:35:18.369288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:29:32.732 [2024-11-17 04:35:18.369304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.732 [2024-11-17 04:35:18.369343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.732 [2024-11-17 04:35:18.369359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:32.732 [2024-11-17 04:35:18.369388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:32.732 [2024-11-17 04:35:18.369451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.732 [2024-11-17 04:35:18.369517] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:32.732 [2024-11-17 04:35:18.369545] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:32.732 [2024-11-17 04:35:18.369577] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:32.732 [2024-11-17 04:35:18.369591] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:32.732 [2024-11-17 04:35:18.369672] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:32.732 [2024-11-17 04:35:18.369685] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:32.732 [2024-11-17 04:35:18.369694] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:32.732 [2024-11-17 04:35:18.369702] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:32.732 [2024-11-17 04:35:18.369711] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:32.732 [2024-11-17 04:35:18.369721] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:32.732 [2024-11-17 04:35:18.369728] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:32.732 [2024-11-17 04:35:18.369733] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:32.732 [2024-11-17 04:35:18.369739] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:32.732 [2024-11-17 04:35:18.369745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.732 [2024-11-17 04:35:18.369750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:32.732 [2024-11-17 04:35:18.369757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:29:32.732 [2024-11-17 04:35:18.369762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.732 [2024-11-17 04:35:18.369826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.732 [2024-11-17 04:35:18.369834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:32.732 [2024-11-17 04:35:18.369844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:32.732 [2024-11-17 04:35:18.369849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.732 [2024-11-17 04:35:18.369936] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:32.732 [2024-11-17 04:35:18.369945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:32.732 [2024-11-17 04:35:18.369954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:32.732 [2024-11-17 04:35:18.369960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.732 [2024-11-17 04:35:18.369966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:32.732 [2024-11-17 04:35:18.369972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:32.732 [2024-11-17 04:35:18.369977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:32.732 [2024-11-17 04:35:18.369988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:32.732 [2024-11-17 04:35:18.369993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:32.732 [2024-11-17 04:35:18.369998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:32.732 [2024-11-17 04:35:18.370004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:32.732 [2024-11-17 04:35:18.370010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:32.732 [2024-11-17 04:35:18.370017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:32.732 [2024-11-17 04:35:18.370023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:32.732 [2024-11-17 04:35:18.370028] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:29:32.732 [2024-11-17 04:35:18.370034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.732 [2024-11-17 04:35:18.370039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:32.732 [2024-11-17 04:35:18.370044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:32.732 [2024-11-17 04:35:18.370051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.732 [2024-11-17 04:35:18.370056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:32.732 [2024-11-17 04:35:18.370061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:32.732 [2024-11-17 04:35:18.370067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.732 [2024-11-17 04:35:18.370073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:32.732 [2024-11-17 04:35:18.370078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:32.732 [2024-11-17 04:35:18.370082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.732 [2024-11-17 04:35:18.370087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:32.732 [2024-11-17 04:35:18.370092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:32.732 [2024-11-17 04:35:18.370098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.732 [2024-11-17 04:35:18.370102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:32.732 [2024-11-17 04:35:18.370108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:32.732 [2024-11-17 04:35:18.370113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:32.732 [2024-11-17 04:35:18.370118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:32.732 [2024-11-17 04:35:18.370123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:32.732 [2024-11-17 04:35:18.370127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:32.732 [2024-11-17 04:35:18.370137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:32.732 [2024-11-17 04:35:18.370142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:32.732 [2024-11-17 04:35:18.370147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:32.732 [2024-11-17 04:35:18.370153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:32.732 [2024-11-17 04:35:18.370158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:32.732 [2024-11-17 04:35:18.370163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.732 [2024-11-17 04:35:18.370167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:32.732 [2024-11-17 04:35:18.370172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:32.732 [2024-11-17 04:35:18.370177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.732 [2024-11-17 04:35:18.370181] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:32.732 [2024-11-17 04:35:18.370192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:32.732 [2024-11-17 04:35:18.370199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:32.732 [2024-11-17 
04:35:18.370205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:32.732 [2024-11-17 04:35:18.370214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:32.732 [2024-11-17 04:35:18.370220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:32.732 [2024-11-17 04:35:18.370226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:32.732 [2024-11-17 04:35:18.370234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:32.732 [2024-11-17 04:35:18.370240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:32.732 [2024-11-17 04:35:18.370246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:32.732 [2024-11-17 04:35:18.370254] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:32.732 [2024-11-17 04:35:18.370261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.732 [2024-11-17 04:35:18.370268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:32.732 [2024-11-17 04:35:18.370275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:32.732 [2024-11-17 04:35:18.370281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:32.732 [2024-11-17 04:35:18.370287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:32.732 [2024-11-17 04:35:18.370293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:32.732 [2024-11-17 04:35:18.370300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:32.732 [2024-11-17 04:35:18.370306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:32.732 [2024-11-17 04:35:18.370312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:32.732 [2024-11-17 04:35:18.370318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:32.732 [2024-11-17 04:35:18.370324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:32.732 [2024-11-17 04:35:18.370329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:32.733 [2024-11-17 04:35:18.370337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:32.733 [2024-11-17 04:35:18.370344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:32.733 [2024-11-17 04:35:18.370350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:32.733 [2024-11-17 
04:35:18.370356] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:32.733 [2024-11-17 04:35:18.370363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.733 [2024-11-17 04:35:18.370369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:32.733 [2024-11-17 04:35:18.370390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:32.733 [2024-11-17 04:35:18.370396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:32.733 [2024-11-17 04:35:18.370403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:32.733 [2024-11-17 04:35:18.370409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.370419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:32.733 [2024-11-17 04:35:18.370427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:29:32.733 [2024-11-17 04:35:18.370434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.378030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.378060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:32.733 [2024-11-17 04:35:18.378067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.561 ms 00:29:32.733 [2024-11-17 04:35:18.378073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.378134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.378140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:32.733 [2024-11-17 04:35:18.378147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:32.733 [2024-11-17 04:35:18.378152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.398040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.398271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:32.733 [2024-11-17 04:35:18.398304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.845 ms 00:29:32.733 [2024-11-17 04:35:18.398320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.398395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.398416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:32.733 [2024-11-17 04:35:18.398432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:32.733 [2024-11-17 04:35:18.398446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.398620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.398651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:32.733 [2024-11-17 04:35:18.398675] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:32.733 [2024-11-17 04:35:18.398688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.398908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.398942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:32.733 [2024-11-17 04:35:18.398965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:29:32.733 [2024-11-17 04:35:18.398987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.407259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.407305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:32.733 [2024-11-17 04:35:18.407323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.237 ms 00:29:32.733 [2024-11-17 04:35:18.407344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.407541] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:32.733 [2024-11-17 04:35:18.407573] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:32.733 [2024-11-17 04:35:18.407591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.407605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:32.733 [2024-11-17 04:35:18.407622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:29:32.733 [2024-11-17 04:35:18.407635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.417666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.417689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:32.733 [2024-11-17 04:35:18.417698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.001 ms 00:29:32.733 [2024-11-17 04:35:18.417707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.417802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.417809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:32.733 [2024-11-17 04:35:18.417815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:29:32.733 [2024-11-17 04:35:18.417824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.417857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.417866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:32.733 [2024-11-17 04:35:18.417875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:32.733 [2024-11-17 04:35:18.417881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.418117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.418125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:32.733 [2024-11-17 04:35:18.418131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 
00:29:32.733 [2024-11-17 04:35:18.418139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.418151] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:32.733 [2024-11-17 04:35:18.418158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.418164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:32.733 [2024-11-17 04:35:18.418173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:32.733 [2024-11-17 04:35:18.418178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.425304] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:32.733 [2024-11-17 04:35:18.425504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.425515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:32.733 [2024-11-17 04:35:18.425522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.313 ms 00:29:32.733 [2024-11-17 04:35:18.425528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.427333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.427353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:32.733 [2024-11-17 04:35:18.427361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:29:32.733 [2024-11-17 04:35:18.427367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.427434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.427442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:32.733 [2024-11-17 04:35:18.427449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:32.733 [2024-11-17 04:35:18.427457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.427477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.427484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:32.733 [2024-11-17 04:35:18.427490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:32.733 [2024-11-17 04:35:18.427495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.427524] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:32.733 [2024-11-17 04:35:18.427531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.427539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:32.733 [2024-11-17 04:35:18.427545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:32.733 [2024-11-17 04:35:18.427551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.431711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.431741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:32.733 [2024-11-17 04:35:18.431748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.145 ms 00:29:32.733 [2024-11-17 04:35:18.431758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.431812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.733 [2024-11-17 04:35:18.431820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:32.733 [2024-11-17 04:35:18.431827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:32.733 [2024-11-17 04:35:18.431832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.733 [2024-11-17 04:35:18.432651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 66.578 ms, result 0 00:29:34.120  [2024-11-17T04:35:20.790Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-17T04:35:21.733Z] Copying: 25/1024 [MB] (10 MBps) [2024-11-17T04:35:22.676Z] Copying: 37/1024 [MB] (11 MBps) [2024-11-17T04:35:23.620Z] Copying: 49/1024 [MB] (12 MBps) [2024-11-17T04:35:24.606Z] Copying: 64/1024 [MB] (14 MBps) [2024-11-17T04:35:25.992Z] Copying: 78/1024 [MB] (13 MBps) [2024-11-17T04:35:26.935Z] Copying: 89/1024 [MB] (11 MBps) [2024-11-17T04:35:27.879Z] Copying: 100/1024 [MB] (10 MBps) [2024-11-17T04:35:28.823Z] Copying: 111/1024 [MB] (10 MBps) [2024-11-17T04:35:29.770Z] Copying: 121/1024 [MB] (10 MBps) [2024-11-17T04:35:30.716Z] Copying: 132/1024 [MB] (10 MBps) [2024-11-17T04:35:31.665Z] Copying: 142/1024 [MB] (10 MBps) [2024-11-17T04:35:32.610Z] Copying: 154/1024 [MB] (11 MBps) [2024-11-17T04:35:33.996Z] Copying: 165/1024 [MB] (10 MBps) [2024-11-17T04:35:34.939Z] Copying: 176/1024 [MB] (11 MBps) [2024-11-17T04:35:35.882Z] Copying: 188/1024 [MB] (11 MBps) [2024-11-17T04:35:36.827Z] Copying: 199/1024 [MB] (11 MBps) [2024-11-17T04:35:37.771Z] Copying: 211/1024 [MB] (11 MBps) [2024-11-17T04:35:38.717Z] Copying: 222/1024 [MB] (11 MBps) [2024-11-17T04:35:39.662Z] Copying: 234/1024 [MB] (11 MBps) [2024-11-17T04:35:40.606Z] Copying: 244/1024 [MB] (10 MBps) [2024-11-17T04:35:41.994Z] Copying: 256/1024 [MB] (11 MBps) [2024-11-17T04:35:42.940Z] Copying: 268/1024 [MB] (11 MBps) [2024-11-17T04:35:43.884Z] Copying: 279/1024 [MB] (11 MBps) [2024-11-17T04:35:44.827Z] Copying: 291/1024 [MB] (11 MBps) [2024-11-17T04:35:45.772Z] Copying: 303/1024 [MB] (11 MBps) [2024-11-17T04:35:46.717Z] Copying: 314/1024 [MB] (10 MBps) [2024-11-17T04:35:47.660Z] Copying: 325/1024 [MB] (11 MBps) [2024-11-17T04:35:48.604Z] Copying: 336/1024 [MB] (11 MBps) [2024-11-17T04:35:49.991Z] Copying: 348/1024 [MB] (11 MBps) [2024-11-17T04:35:50.937Z] Copying: 359/1024 [MB] (11 MBps) [2024-11-17T04:35:51.883Z] Copying: 371/1024 [MB] (11 MBps) [2024-11-17T04:35:52.826Z] Copying: 382/1024 [MB] (11 MBps) [2024-11-17T04:35:53.770Z] Copying: 399/1024 [MB] (17 MBps) [2024-11-17T04:35:54.711Z] Copying: 412/1024 [MB] (12 MBps) [2024-11-17T04:35:55.656Z] Copying: 424/1024 [MB] (12 MBps) [2024-11-17T04:35:56.657Z] Copying: 448/1024 [MB] (24 MBps) [2024-11-17T04:35:57.599Z] Copying: 462/1024 [MB] (14 MBps) [2024-11-17T04:35:58.985Z] Copying: 475/1024 [MB] (12 MBps) [2024-11-17T04:35:59.930Z] Copying: 489/1024 [MB] (13 MBps) [2024-11-17T04:36:00.875Z] Copying: 500/1024 [MB] (11 MBps) [2024-11-17T04:36:01.819Z] Copying: 512/1024 [MB] (11 MBps) [2024-11-17T04:36:02.764Z] Copying: 522/1024 [MB] (10 MBps) [2024-11-17T04:36:03.707Z] Copying: 533/1024 [MB] (11 MBps) [2024-11-17T04:36:04.651Z] Copying: 545/1024 [MB] (11 MBps) [2024-11-17T04:36:05.596Z] Copying: 556/1024 [MB] (11 MBps) [2024-11-17T04:36:06.981Z] 
Copying: 567/1024 [MB] (10 MBps) [2024-11-17T04:36:07.926Z] Copying: 578/1024 [MB] (11 MBps) [2024-11-17T04:36:08.870Z] Copying: 590/1024 [MB] (11 MBps) [2024-11-17T04:36:09.815Z] Copying: 606/1024 [MB] (16 MBps) [2024-11-17T04:36:10.760Z] Copying: 618/1024 [MB] (11 MBps) [2024-11-17T04:36:11.706Z] Copying: 630/1024 [MB] (12 MBps) [2024-11-17T04:36:12.658Z] Copying: 641/1024 [MB] (11 MBps) [2024-11-17T04:36:13.602Z] Copying: 653/1024 [MB] (11 MBps) [2024-11-17T04:36:14.988Z] Copying: 669/1024 [MB] (16 MBps) [2024-11-17T04:36:15.932Z] Copying: 685/1024 [MB] (15 MBps) [2024-11-17T04:36:16.874Z] Copying: 704/1024 [MB] (19 MBps) [2024-11-17T04:36:17.814Z] Copying: 718/1024 [MB] (13 MBps) [2024-11-17T04:36:18.753Z] Copying: 737/1024 [MB] (19 MBps) [2024-11-17T04:36:19.707Z] Copying: 753/1024 [MB] (16 MBps) [2024-11-17T04:36:20.649Z] Copying: 771/1024 [MB] (17 MBps) [2024-11-17T04:36:21.593Z] Copying: 793/1024 [MB] (22 MBps) [2024-11-17T04:36:22.982Z] Copying: 815/1024 [MB] (21 MBps) [2024-11-17T04:36:23.927Z] Copying: 825/1024 [MB] (10 MBps) [2024-11-17T04:36:24.870Z] Copying: 843/1024 [MB] (18 MBps) [2024-11-17T04:36:25.816Z] Copying: 864/1024 [MB] (20 MBps) [2024-11-17T04:36:26.761Z] Copying: 879/1024 [MB] (15 MBps) [2024-11-17T04:36:27.705Z] Copying: 894/1024 [MB] (15 MBps) [2024-11-17T04:36:28.716Z] Copying: 907/1024 [MB] (12 MBps) [2024-11-17T04:36:29.663Z] Copying: 918/1024 [MB] (11 MBps) [2024-11-17T04:36:30.608Z] Copying: 932/1024 [MB] (14 MBps) [2024-11-17T04:36:31.999Z] Copying: 944/1024 [MB] (11 MBps) [2024-11-17T04:36:32.945Z] Copying: 959/1024 [MB] (15 MBps) [2024-11-17T04:36:33.893Z] Copying: 972/1024 [MB] (12 MBps) [2024-11-17T04:36:34.840Z] Copying: 982/1024 [MB] (10 MBps) [2024-11-17T04:36:35.783Z] Copying: 993/1024 [MB] (10 MBps) [2024-11-17T04:36:36.727Z] Copying: 1012/1024 [MB] (18 MBps) [2024-11-17T04:36:36.727Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-17 04:36:36.635812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.000 [2024-11-17 04:36:36.635903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:51.000 [2024-11-17 04:36:36.635922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:51.000 [2024-11-17 04:36:36.635933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.000 [2024-11-17 04:36:36.635957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:51.000 [2024-11-17 04:36:36.636784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.000 [2024-11-17 04:36:36.636814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:51.000 [2024-11-17 04:36:36.636827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:30:51.000 [2024-11-17 04:36:36.636850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.000 [2024-11-17 04:36:36.637097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.000 [2024-11-17 04:36:36.637108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:51.000 [2024-11-17 04:36:36.637117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:30:51.000 [2024-11-17 04:36:36.637131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.000 [2024-11-17 04:36:36.637163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.000 [2024-11-17 04:36:36.637176] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:51.000 [2024-11-17 04:36:36.637185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:51.000 [2024-11-17 04:36:36.637203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.000 [2024-11-17 04:36:36.637273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.000 [2024-11-17 04:36:36.637283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:51.000 [2024-11-17 04:36:36.637292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:51.000 [2024-11-17 04:36:36.637300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.000 [2024-11-17 04:36:36.637325] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:51.000 [2024-11-17 04:36:36.637339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 
wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:51.000 [2024-11-17 04:36:36.637772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637939] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.637999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638134] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:51.001 [2024-11-17 04:36:36.638194] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:51.001 [2024-11-17 04:36:36.638206] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d54227a3-f4e5-474b-a0e2-73feee6cf014 00:30:51.001 [2024-11-17 04:36:36.638214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:51.001 [2024-11-17 04:36:36.638225] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:51.001 [2024-11-17 04:36:36.638234] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:51.001 [2024-11-17 04:36:36.638242] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:51.001 [2024-11-17 04:36:36.638254] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:51.001 [2024-11-17 04:36:36.638262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:51.001 [2024-11-17 04:36:36.638269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:51.001 [2024-11-17 04:36:36.638278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:51.001 [2024-11-17 04:36:36.638285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:51.001 [2024-11-17 04:36:36.638292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.001 [2024-11-17 04:36:36.638301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:51.001 [2024-11-17 04:36:36.638313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:30:51.001 [2024-11-17 04:36:36.638321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.001 [2024-11-17 04:36:36.641603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.001 [2024-11-17 04:36:36.641640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:51.001 [2024-11-17 04:36:36.641651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms 00:30:51.001 [2024-11-17 04:36:36.641662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.001 [2024-11-17 04:36:36.641786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.001 [2024-11-17 04:36:36.641795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:51.001 [2024-11-17 04:36:36.641805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:30:51.001 [2024-11-17 04:36:36.641820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:51.001 [2024-11-17 04:36:36.650135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.001 [2024-11-17 04:36:36.650300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:51.001 [2024-11-17 04:36:36.650358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.001 [2024-11-17 04:36:36.650424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.001 [2024-11-17 04:36:36.650515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.001 [2024-11-17 04:36:36.650538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:51.001 [2024-11-17 04:36:36.650558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.001 [2024-11-17 04:36:36.650640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.001 [2024-11-17 04:36:36.650738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.001 [2024-11-17 04:36:36.650765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:51.001 [2024-11-17 04:36:36.650786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.001 [2024-11-17 04:36:36.650807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.001 [2024-11-17 04:36:36.650836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.001 [2024-11-17 04:36:36.650907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:51.001 [2024-11-17 04:36:36.650962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.001 [2024-11-17 04:36:36.650990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.001 [2024-11-17 04:36:36.664787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.001 [2024-11-17 04:36:36.664982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:51.001 [2024-11-17 04:36:36.665038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.001 [2024-11-17 04:36:36.665062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.001 [2024-11-17 04:36:36.676694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.001 [2024-11-17 04:36:36.676875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:51.002 [2024-11-17 04:36:36.676933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.002 [2024-11-17 04:36:36.676967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.002 [2024-11-17 04:36:36.677037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.002 [2024-11-17 04:36:36.677061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:51.002 [2024-11-17 04:36:36.677081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.002 [2024-11-17 04:36:36.677108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.002 [2024-11-17 04:36:36.677157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.002 [2024-11-17 04:36:36.677179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:51.002 [2024-11-17 04:36:36.677200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.002 [2024-11-17 
04:36:36.677309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.002 [2024-11-17 04:36:36.677416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.002 [2024-11-17 04:36:36.677560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:51.002 [2024-11-17 04:36:36.677624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.002 [2024-11-17 04:36:36.677647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.002 [2024-11-17 04:36:36.677695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.002 [2024-11-17 04:36:36.677750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:51.002 [2024-11-17 04:36:36.677774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.002 [2024-11-17 04:36:36.677793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.002 [2024-11-17 04:36:36.678001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.002 [2024-11-17 04:36:36.678049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:51.002 [2024-11-17 04:36:36.678073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.002 [2024-11-17 04:36:36.678094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.002 [2024-11-17 04:36:36.678212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.002 [2024-11-17 04:36:36.678239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:51.002 [2024-11-17 04:36:36.678261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.002 [2024-11-17 04:36:36.678280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.002 [2024-11-17 04:36:36.678450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.582 ms, result 0 00:30:51.261 00:30:51.261 00:30:51.261 04:36:36 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:53.166 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:53.166 04:36:38 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:53.166 [2024-11-17 04:36:38.571589] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:30:53.166 [2024-11-17 04:36:38.571849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94202 ] 00:30:53.166 [2024-11-17 04:36:38.730957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.166 [2024-11-17 04:36:38.761431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.166 [2024-11-17 04:36:38.875669] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:53.166 [2024-11-17 04:36:38.875749] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:53.431 [2024-11-17 04:36:39.036703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.036925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:53.431 [2024-11-17 04:36:39.036950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:53.431 [2024-11-17 04:36:39.036960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.037029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.037041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:53.431 [2024-11-17 04:36:39.037050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:53.431 [2024-11-17 04:36:39.037058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.037084] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:53.431 [2024-11-17 04:36:39.037343] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:53.431 [2024-11-17 04:36:39.037363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.037371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:53.431 [2024-11-17 04:36:39.037405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:30:53.431 [2024-11-17 04:36:39.037416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.037699] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:53.431 [2024-11-17 04:36:39.037725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.037735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:53.431 [2024-11-17 04:36:39.037751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:53.431 [2024-11-17 04:36:39.037760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.037819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.037835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:53.431 [2024-11-17 04:36:39.037843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:53.431 [2024-11-17 04:36:39.037854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.038501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:53.431 [2024-11-17 04:36:39.038535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:53.431 [2024-11-17 04:36:39.038547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:30:53.431 [2024-11-17 04:36:39.038555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.038647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.038659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:53.431 [2024-11-17 04:36:39.038667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:30:53.431 [2024-11-17 04:36:39.038676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.038705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.038715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:53.431 [2024-11-17 04:36:39.038723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:53.431 [2024-11-17 04:36:39.038731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.038753] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:53.431 [2024-11-17 04:36:39.040832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.040869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:53.431 [2024-11-17 04:36:39.040880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:30:53.431 [2024-11-17 04:36:39.040888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.040930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.040944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:53.431 [2024-11-17 04:36:39.040953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:53.431 [2024-11-17 04:36:39.040965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.041030] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:53.431 [2024-11-17 04:36:39.041053] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:53.431 [2024-11-17 04:36:39.041096] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:53.431 [2024-11-17 04:36:39.041119] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:53.431 [2024-11-17 04:36:39.041224] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:53.431 [2024-11-17 04:36:39.041237] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:53.431 [2024-11-17 04:36:39.041248] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:53.431 [2024-11-17 04:36:39.041262] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:53.431 [2024-11-17 04:36:39.041276] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:53.431 [2024-11-17 04:36:39.041287] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:53.431 [2024-11-17 04:36:39.041296] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:53.431 [2024-11-17 04:36:39.041304] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:53.431 [2024-11-17 04:36:39.041313] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:53.431 [2024-11-17 04:36:39.041322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.041330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:53.431 [2024-11-17 04:36:39.041339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:30:53.431 [2024-11-17 04:36:39.041347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.041453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.431 [2024-11-17 04:36:39.041469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:53.431 [2024-11-17 04:36:39.041483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:30:53.431 [2024-11-17 04:36:39.041492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.431 [2024-11-17 04:36:39.041591] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:53.431 [2024-11-17 04:36:39.041604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:53.431 [2024-11-17 04:36:39.041617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:53.431 [2024-11-17 04:36:39.041626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:53.431 [2024-11-17 04:36:39.041635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:53.431 [2024-11-17 04:36:39.041643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:53.431 [2024-11-17 04:36:39.041652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:53.431 [2024-11-17 04:36:39.041666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:53.431 [2024-11-17 04:36:39.041675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:53.432 [2024-11-17 04:36:39.041691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:53.432 [2024-11-17 04:36:39.041699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:53.432 [2024-11-17 04:36:39.041708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:53.432 [2024-11-17 04:36:39.041716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:53.432 [2024-11-17 04:36:39.041724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:53.432 [2024-11-17 04:36:39.041731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:53.432 [2024-11-17 04:36:39.041747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:53.432 [2024-11-17 04:36:39.041757] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:53.432 [2024-11-17 04:36:39.041773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:53.432 [2024-11-17 04:36:39.041789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:53.432 [2024-11-17 04:36:39.041797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:53.432 [2024-11-17 04:36:39.041813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:53.432 [2024-11-17 04:36:39.041820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:53.432 [2024-11-17 04:36:39.041836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:53.432 [2024-11-17 04:36:39.041844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:53.432 [2024-11-17 04:36:39.041860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:53.432 [2024-11-17 04:36:39.041868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:53.432 [2024-11-17 04:36:39.041888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:53.432 [2024-11-17 04:36:39.041897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:53.432 [2024-11-17 04:36:39.041904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:53.432 [2024-11-17 04:36:39.041913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:53.432 [2024-11-17 04:36:39.041921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:53.432 [2024-11-17 04:36:39.041929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:53.432 [2024-11-17 04:36:39.041944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:53.432 [2024-11-17 04:36:39.041953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041960] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:53.432 [2024-11-17 04:36:39.041970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:53.432 [2024-11-17 04:36:39.041979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:53.432 [2024-11-17 04:36:39.041987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:53.432 [2024-11-17 04:36:39.041999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:53.432 [2024-11-17 04:36:39.042007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:53.432 [2024-11-17 04:36:39.042015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:53.432 
[2024-11-17 04:36:39.042026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:53.432 [2024-11-17 04:36:39.042033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:53.432 [2024-11-17 04:36:39.042042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:53.432 [2024-11-17 04:36:39.042052] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:53.432 [2024-11-17 04:36:39.042062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:53.432 [2024-11-17 04:36:39.042072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:53.432 [2024-11-17 04:36:39.042081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:53.432 [2024-11-17 04:36:39.042090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:53.432 [2024-11-17 04:36:39.042098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:53.432 [2024-11-17 04:36:39.042106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:53.432 [2024-11-17 04:36:39.042115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:53.432 [2024-11-17 04:36:39.042124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:53.432 [2024-11-17 04:36:39.042132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:53.432 [2024-11-17 04:36:39.042140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:53.432 [2024-11-17 04:36:39.042148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:53.432 [2024-11-17 04:36:39.042157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:53.432 [2024-11-17 04:36:39.042169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:53.432 [2024-11-17 04:36:39.042177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:53.432 [2024-11-17 04:36:39.042186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:53.432 [2024-11-17 04:36:39.042195] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:53.432 [2024-11-17 04:36:39.042205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:53.432 [2024-11-17 04:36:39.042214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:53.432 [2024-11-17 04:36:39.042223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:53.432 [2024-11-17 04:36:39.042231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:53.432 [2024-11-17 04:36:39.042239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:53.432 [2024-11-17 04:36:39.042248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.432 [2024-11-17 04:36:39.042261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:53.432 [2024-11-17 04:36:39.042269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:30:53.432 [2024-11-17 04:36:39.042279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.432 [2024-11-17 04:36:39.051760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.432 [2024-11-17 04:36:39.051943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:53.432 [2024-11-17 04:36:39.051962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.438 ms 00:30:53.432 [2024-11-17 04:36:39.051970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.432 [2024-11-17 04:36:39.052060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.432 [2024-11-17 04:36:39.052068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:53.432 [2024-11-17 04:36:39.052076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:53.432 [2024-11-17 04:36:39.052083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.432 [2024-11-17 04:36:39.071848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.432 [2024-11-17 04:36:39.071908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:53.432 [2024-11-17 04:36:39.071922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.702 ms 00:30:53.432 [2024-11-17 04:36:39.071930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.432 [2024-11-17 04:36:39.071983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.432 [2024-11-17 04:36:39.071994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:53.432 [2024-11-17 04:36:39.072003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:53.432 [2024-11-17 04:36:39.072012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.432 [2024-11-17 04:36:39.072115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.432 [2024-11-17 04:36:39.072126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:53.432 [2024-11-17 04:36:39.072138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:53.432 [2024-11-17 04:36:39.072147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.432 [2024-11-17 04:36:39.072273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.432 [2024-11-17 04:36:39.072285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:53.432 [2024-11-17 04:36:39.072294] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:30:53.432 [2024-11-17 04:36:39.072306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.432 [2024-11-17 04:36:39.080414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.432 [2024-11-17 04:36:39.080458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:53.432 [2024-11-17 04:36:39.080470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.087 ms 00:30:53.432 [2024-11-17 04:36:39.080488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.080638] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:53.433 [2024-11-17 04:36:39.080656] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:53.433 [2024-11-17 04:36:39.080669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.080680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:53.433 [2024-11-17 04:36:39.080696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:53.433 [2024-11-17 04:36:39.080705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.093931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.093972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:53.433 [2024-11-17 04:36:39.093990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.202 ms 00:30:53.433 [2024-11-17 04:36:39.093998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.094133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.094142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:53.433 [2024-11-17 04:36:39.094150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:30:53.433 [2024-11-17 04:36:39.094158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.094210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.094219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:53.433 [2024-11-17 04:36:39.094230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:53.433 [2024-11-17 04:36:39.094242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.094609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.094623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:53.433 [2024-11-17 04:36:39.094634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:30:53.433 [2024-11-17 04:36:39.094642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.094659] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:53.433 [2024-11-17 04:36:39.094670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.094678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:53.433 [2024-11-17 04:36:39.094689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:53.433 [2024-11-17 04:36:39.094696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.103959] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:53.433 [2024-11-17 04:36:39.104115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.104126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:53.433 [2024-11-17 04:36:39.104140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.402 ms 00:30:53.433 [2024-11-17 04:36:39.104148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.106589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.106620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:53.433 [2024-11-17 04:36:39.106630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:30:53.433 [2024-11-17 04:36:39.106640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.106732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.106748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:53.433 [2024-11-17 04:36:39.106756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:53.433 [2024-11-17 04:36:39.106764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.106790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.106799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:53.433 [2024-11-17 04:36:39.106807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:53.433 [2024-11-17 04:36:39.106814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.106851] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:53.433 [2024-11-17 04:36:39.106861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.106868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:53.433 [2024-11-17 04:36:39.106876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:53.433 [2024-11-17 04:36:39.106883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.112481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.112539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:53.433 [2024-11-17 04:36:39.112550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.576 ms 00:30:53.433 [2024-11-17 04:36:39.112572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.112654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.433 [2024-11-17 04:36:39.112664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:53.433 [2024-11-17 04:36:39.112673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.038 ms 00:30:53.433 [2024-11-17 04:36:39.112684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.433 [2024-11-17 04:36:39.113843] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 76.609 ms, result 0 00:30:54.822  [2024-11-17T04:36:41.492Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-17T04:36:42.435Z] Copying: 27/1024 [MB] (10 MBps) [2024-11-17T04:36:43.382Z] Copying: 41/1024 [MB] (14 MBps) [2024-11-17T04:36:44.328Z] Copying: 59/1024 [MB] (17 MBps) [2024-11-17T04:36:45.273Z] Copying: 75/1024 [MB] (15 MBps) [2024-11-17T04:36:46.217Z] Copying: 88/1024 [MB] (13 MBps) [2024-11-17T04:36:47.161Z] Copying: 104/1024 [MB] (16 MBps) [2024-11-17T04:36:48.549Z] Copying: 117/1024 [MB] (12 MBps) [2024-11-17T04:36:49.495Z] Copying: 129/1024 [MB] (11 MBps) [2024-11-17T04:36:50.440Z] Copying: 140/1024 [MB] (10 MBps) [2024-11-17T04:36:51.385Z] Copying: 151/1024 [MB] (11 MBps) [2024-11-17T04:36:52.331Z] Copying: 165/1024 [MB] (13 MBps) [2024-11-17T04:36:53.289Z] Copying: 180/1024 [MB] (15 MBps) [2024-11-17T04:36:54.232Z] Copying: 190/1024 [MB] (10 MBps) [2024-11-17T04:36:55.176Z] Copying: 201/1024 [MB] (10 MBps) [2024-11-17T04:36:56.560Z] Copying: 211/1024 [MB] (10 MBps) [2024-11-17T04:36:57.493Z] Copying: 221/1024 [MB] (10 MBps) [2024-11-17T04:36:58.427Z] Copying: 247/1024 [MB] (25 MBps) [2024-11-17T04:36:59.375Z] Copying: 297/1024 [MB] (49 MBps) [2024-11-17T04:37:00.356Z] Copying: 338/1024 [MB] (41 MBps) [2024-11-17T04:37:01.296Z] Copying: 361/1024 [MB] (22 MBps) [2024-11-17T04:37:02.239Z] Copying: 372/1024 [MB] (11 MBps) [2024-11-17T04:37:03.179Z] Copying: 383/1024 [MB] (11 MBps) [2024-11-17T04:37:04.565Z] Copying: 395/1024 [MB] (11 MBps) [2024-11-17T04:37:05.139Z] Copying: 406/1024 [MB] (10 MBps) [2024-11-17T04:37:06.528Z] Copying: 417/1024 [MB] (10 MBps) [2024-11-17T04:37:07.474Z] Copying: 431/1024 [MB] (13 MBps) [2024-11-17T04:37:08.419Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-17T04:37:09.364Z] Copying: 452/1024 [MB] (10 MBps) [2024-11-17T04:37:10.308Z] Copying: 467/1024 [MB] (14 MBps) [2024-11-17T04:37:11.252Z] Copying: 479/1024 [MB] (12 MBps) [2024-11-17T04:37:12.195Z] Copying: 497/1024 [MB] (17 MBps) [2024-11-17T04:37:13.136Z] Copying: 512/1024 [MB] (15 MBps) [2024-11-17T04:37:14.523Z] Copying: 522/1024 [MB] (10 MBps) [2024-11-17T04:37:15.465Z] Copying: 533/1024 [MB] (10 MBps) [2024-11-17T04:37:16.409Z] Copying: 543/1024 [MB] (10 MBps) [2024-11-17T04:37:17.348Z] Copying: 553/1024 [MB] (10 MBps) [2024-11-17T04:37:18.281Z] Copying: 564/1024 [MB] (10 MBps) [2024-11-17T04:37:19.214Z] Copying: 606/1024 [MB] (42 MBps) [2024-11-17T04:37:20.147Z] Copying: 653/1024 [MB] (47 MBps) [2024-11-17T04:37:21.520Z] Copying: 703/1024 [MB] (49 MBps) [2024-11-17T04:37:22.462Z] Copying: 752/1024 [MB] (48 MBps) [2024-11-17T04:37:23.402Z] Copying: 787/1024 [MB] (34 MBps) [2024-11-17T04:37:24.346Z] Copying: 797/1024 [MB] (10 MBps) [2024-11-17T04:37:25.292Z] Copying: 807/1024 [MB] (10 MBps) [2024-11-17T04:37:26.235Z] Copying: 818/1024 [MB] (10 MBps) [2024-11-17T04:37:27.178Z] Copying: 830/1024 [MB] (11 MBps) [2024-11-17T04:37:28.566Z] Copying: 847/1024 [MB] (17 MBps) [2024-11-17T04:37:29.140Z] Copying: 861/1024 [MB] (13 MBps) [2024-11-17T04:37:30.541Z] Copying: 874/1024 [MB] (13 MBps) [2024-11-17T04:37:31.545Z] Copying: 888/1024 [MB] (14 MBps) [2024-11-17T04:37:32.490Z] Copying: 905/1024 [MB] (16 MBps) [2024-11-17T04:37:33.435Z] Copying: 920/1024 [MB] (15 MBps) [2024-11-17T04:37:34.378Z] Copying: 938/1024 [MB] 
(18 MBps) [2024-11-17T04:37:35.325Z] Copying: 948/1024 [MB] (10 MBps) [2024-11-17T04:37:36.271Z] Copying: 959/1024 [MB] (10 MBps) [2024-11-17T04:37:37.216Z] Copying: 969/1024 [MB] (10 MBps) [2024-11-17T04:37:38.162Z] Copying: 1002904/1048576 [kB] (10232 kBps) [2024-11-17T04:37:39.549Z] Copying: 989/1024 [MB] (10 MBps) [2024-11-17T04:37:40.490Z] Copying: 999/1024 [MB] (10 MBps) [2024-11-17T04:37:41.432Z] Copying: 1010/1024 [MB] (10 MBps) [2024-11-17T04:37:42.375Z] Copying: 1021/1024 [MB] (10 MBps) [2024-11-17T04:37:42.638Z] Copying: 1048328/1048576 [kB] (2696 kBps) [2024-11-17T04:37:42.638Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-17 04:37:42.405289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.911 [2024-11-17 04:37:42.405394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:56.911 [2024-11-17 04:37:42.405412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:56.911 [2024-11-17 04:37:42.405422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.911 [2024-11-17 04:37:42.406745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:56.911 [2024-11-17 04:37:42.409587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.911 [2024-11-17 04:37:42.409635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:56.911 [2024-11-17 04:37:42.409648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.806 ms 00:31:56.911 [2024-11-17 04:37:42.409657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.911 [2024-11-17 04:37:42.420735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.911 [2024-11-17 04:37:42.420793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:56.911 [2024-11-17 04:37:42.420806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.366 ms 00:31:56.911 [2024-11-17 04:37:42.420814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.911 [2024-11-17 04:37:42.420843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.911 [2024-11-17 04:37:42.420858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:56.911 [2024-11-17 04:37:42.420868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:56.911 [2024-11-17 04:37:42.420876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.911 [2024-11-17 04:37:42.420932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.911 [2024-11-17 04:37:42.420942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:56.911 [2024-11-17 04:37:42.420953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:56.911 [2024-11-17 04:37:42.420962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.911 [2024-11-17 04:37:42.420976] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:56.911 [2024-11-17 04:37:42.420988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126720 / 261120 wr_cnt: 1 state: open 00:31:56.911 [2024-11-17 04:37:42.420998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
3: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421419] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:56.911 [2024-11-17 04:37:42.421459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 
04:37:42.421630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:56.912 [2024-11-17 04:37:42.421811] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:56.912 [2024-11-17 04:37:42.421824] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d54227a3-f4e5-474b-a0e2-73feee6cf014 00:31:56.912 [2024-11-17 
04:37:42.421832] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126720 00:31:56.912 [2024-11-17 04:37:42.421840] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126752 00:31:56.912 [2024-11-17 04:37:42.421847] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126720 00:31:56.912 [2024-11-17 04:37:42.421855] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:31:56.912 [2024-11-17 04:37:42.421863] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:56.912 [2024-11-17 04:37:42.421873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:56.912 [2024-11-17 04:37:42.421880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:56.912 [2024-11-17 04:37:42.421887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:56.912 [2024-11-17 04:37:42.421893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:56.912 [2024-11-17 04:37:42.421901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.912 [2024-11-17 04:37:42.421910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:56.912 [2024-11-17 04:37:42.421918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:31:56.912 [2024-11-17 04:37:42.421926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.424171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.912 [2024-11-17 04:37:42.424348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:56.912 [2024-11-17 04:37:42.424393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.230 ms 00:31:56.912 [2024-11-17 04:37:42.424406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.424540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.912 [2024-11-17 04:37:42.424551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:56.912 [2024-11-17 04:37:42.424560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:31:56.912 [2024-11-17 04:37:42.424585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.431903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.912 [2024-11-17 04:37:42.431954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:56.912 [2024-11-17 04:37:42.431965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.912 [2024-11-17 04:37:42.431973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.432033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.912 [2024-11-17 04:37:42.432046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:56.912 [2024-11-17 04:37:42.432055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.912 [2024-11-17 04:37:42.432063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.432115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.912 [2024-11-17 04:37:42.432126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:56.912 [2024-11-17 04:37:42.432137] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.912 [2024-11-17 04:37:42.432146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.432162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.912 [2024-11-17 04:37:42.432170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:56.912 [2024-11-17 04:37:42.432178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.912 [2024-11-17 04:37:42.432186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.445745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.912 [2024-11-17 04:37:42.445796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:56.912 [2024-11-17 04:37:42.445817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.912 [2024-11-17 04:37:42.445825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.457275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.912 [2024-11-17 04:37:42.457325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:56.912 [2024-11-17 04:37:42.457346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.912 [2024-11-17 04:37:42.457354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.457425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.912 [2024-11-17 04:37:42.457436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:56.912 [2024-11-17 04:37:42.457445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.912 [2024-11-17 04:37:42.457455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.457521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.912 [2024-11-17 04:37:42.457531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:56.912 [2024-11-17 04:37:42.457540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.912 [2024-11-17 04:37:42.457548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.912 [2024-11-17 04:37:42.457605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.913 [2024-11-17 04:37:42.457623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:56.913 [2024-11-17 04:37:42.457632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.913 [2024-11-17 04:37:42.457640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.913 [2024-11-17 04:37:42.457673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.913 [2024-11-17 04:37:42.457683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:56.913 [2024-11-17 04:37:42.457692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.913 [2024-11-17 04:37:42.457699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.913 [2024-11-17 04:37:42.457739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.913 [2024-11-17 04:37:42.457749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:31:56.913 [2024-11-17 04:37:42.457758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.913 [2024-11-17 04:37:42.457766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.913 [2024-11-17 04:37:42.457816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.913 [2024-11-17 04:37:42.457828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:56.913 [2024-11-17 04:37:42.457836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.913 [2024-11-17 04:37:42.457845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.913 [2024-11-17 04:37:42.457983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 56.319 ms, result 0 00:31:57.857 00:31:57.857 00:31:57.857 04:37:43 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:57.857 [2024-11-17 04:37:43.477461] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:31:57.858 [2024-11-17 04:37:43.477610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94856 ] 00:31:58.119 [2024-11-17 04:37:43.641594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.119 [2024-11-17 04:37:43.670175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.119 [2024-11-17 04:37:43.784169] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:58.119 [2024-11-17 04:37:43.784252] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:58.382 [2024-11-17 04:37:43.946351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.382 [2024-11-17 04:37:43.946426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:58.382 [2024-11-17 04:37:43.946442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:58.382 [2024-11-17 04:37:43.946451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.382 [2024-11-17 04:37:43.946512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.382 [2024-11-17 04:37:43.946525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:58.382 [2024-11-17 04:37:43.946534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:58.382 [2024-11-17 04:37:43.946545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.382 [2024-11-17 04:37:43.946571] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:58.382 [2024-11-17 04:37:43.947334] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:58.382 [2024-11-17 04:37:43.947422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.382 [2024-11-17 04:37:43.947434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:58.382 [2024-11-17 04:37:43.947444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.861 ms 00:31:58.382 [2024-11-17 04:37:43.947456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.382 [2024-11-17 04:37:43.947791] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:58.382 [2024-11-17 04:37:43.947819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.382 [2024-11-17 04:37:43.947828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:58.382 [2024-11-17 04:37:43.947838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:58.382 [2024-11-17 04:37:43.947846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.382 [2024-11-17 04:37:43.947903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.382 [2024-11-17 04:37:43.947916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:58.382 [2024-11-17 04:37:43.947924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:58.382 [2024-11-17 04:37:43.947933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.382 [2024-11-17 04:37:43.948178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.382 [2024-11-17 04:37:43.948198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:58.382 [2024-11-17 04:37:43.948206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:31:58.382 [2024-11-17 04:37:43.948214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.382 [2024-11-17 04:37:43.948303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.382 [2024-11-17 04:37:43.948314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:58.382 [2024-11-17 04:37:43.948325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:31:58.382 [2024-11-17 04:37:43.948333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.382 [2024-11-17 04:37:43.948356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.382 [2024-11-17 04:37:43.948366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:58.382 [2024-11-17 04:37:43.948394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:58.382 [2024-11-17 04:37:43.948403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.382 [2024-11-17 04:37:43.948424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:58.382 [2024-11-17 04:37:43.950522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.382 [2024-11-17 04:37:43.950557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:58.382 [2024-11-17 04:37:43.950577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:31:58.382 [2024-11-17 04:37:43.950586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.382 [2024-11-17 04:37:43.950620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.382 [2024-11-17 04:37:43.950634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:58.382 [2024-11-17 04:37:43.950643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:58.382 [2024-11-17 04:37:43.950651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:58.382 [2024-11-17 04:37:43.950717] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:58.382 [2024-11-17 04:37:43.950741] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:58.382 [2024-11-17 04:37:43.950785] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:58.383 [2024-11-17 04:37:43.950805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:58.383 [2024-11-17 04:37:43.950910] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:58.383 [2024-11-17 04:37:43.950922] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:58.383 [2024-11-17 04:37:43.950935] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:58.383 [2024-11-17 04:37:43.950946] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:58.383 [2024-11-17 04:37:43.950954] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:58.383 [2024-11-17 04:37:43.950966] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:58.383 [2024-11-17 04:37:43.950974] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:58.383 [2024-11-17 04:37:43.950983] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:58.383 [2024-11-17 04:37:43.950990] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:58.383 [2024-11-17 04:37:43.951002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.383 [2024-11-17 04:37:43.951014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:58.383 [2024-11-17 04:37:43.951025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:31:58.383 [2024-11-17 04:37:43.951032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.383 [2024-11-17 04:37:43.951117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.383 [2024-11-17 04:37:43.951126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:58.383 [2024-11-17 04:37:43.951137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:58.383 [2024-11-17 04:37:43.951144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.383 [2024-11-17 04:37:43.951240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:58.383 [2024-11-17 04:37:43.951251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:58.383 [2024-11-17 04:37:43.951259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:58.383 [2024-11-17 04:37:43.951266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:58.383 [2024-11-17 04:37:43.951280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:58.383 [2024-11-17 04:37:43.951304] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:58.383 [2024-11-17 04:37:43.951312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:58.383 [2024-11-17 04:37:43.951329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:58.383 [2024-11-17 04:37:43.951338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:58.383 [2024-11-17 04:37:43.951345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:58.383 [2024-11-17 04:37:43.951352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:58.383 [2024-11-17 04:37:43.951359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:58.383 [2024-11-17 04:37:43.951366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:58.383 [2024-11-17 04:37:43.951421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:58.383 [2024-11-17 04:37:43.951428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:58.383 [2024-11-17 04:37:43.951443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.383 [2024-11-17 04:37:43.951457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:58.383 [2024-11-17 04:37:43.951464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.383 [2024-11-17 04:37:43.951481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:58.383 [2024-11-17 04:37:43.951488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.383 [2024-11-17 04:37:43.951501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:58.383 [2024-11-17 04:37:43.951508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.383 [2024-11-17 04:37:43.951521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:58.383 [2024-11-17 04:37:43.951528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:58.383 [2024-11-17 04:37:43.951541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:58.383 [2024-11-17 04:37:43.951547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:58.383 [2024-11-17 04:37:43.951554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:58.383 [2024-11-17 04:37:43.951560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:58.383 [2024-11-17 04:37:43.951567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 
00:31:58.383 [2024-11-17 04:37:43.951575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:58.383 [2024-11-17 04:37:43.951592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:58.383 [2024-11-17 04:37:43.951599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951607] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:58.383 [2024-11-17 04:37:43.951615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:58.383 [2024-11-17 04:37:43.951623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:58.383 [2024-11-17 04:37:43.951631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.383 [2024-11-17 04:37:43.951644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:58.383 [2024-11-17 04:37:43.951651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:58.383 [2024-11-17 04:37:43.951657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:58.383 [2024-11-17 04:37:43.951664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:58.383 [2024-11-17 04:37:43.951670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:58.383 [2024-11-17 04:37:43.951677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:58.383 [2024-11-17 04:37:43.951685] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:58.383 [2024-11-17 04:37:43.951702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:58.383 [2024-11-17 04:37:43.951710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:58.383 [2024-11-17 04:37:43.951717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:58.383 [2024-11-17 04:37:43.951727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:58.383 [2024-11-17 04:37:43.951734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:58.383 [2024-11-17 04:37:43.951741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:58.383 [2024-11-17 04:37:43.951748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:58.383 [2024-11-17 04:37:43.951754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:58.383 [2024-11-17 04:37:43.951762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:58.383 [2024-11-17 04:37:43.951769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:58.383 [2024-11-17 04:37:43.951776] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:58.383 [2024-11-17 04:37:43.951783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:58.383 [2024-11-17 04:37:43.951789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:58.383 [2024-11-17 04:37:43.951796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:58.383 [2024-11-17 04:37:43.951803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:58.383 [2024-11-17 04:37:43.951810] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:58.383 [2024-11-17 04:37:43.951818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:58.383 [2024-11-17 04:37:43.951827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:58.383 [2024-11-17 04:37:43.951835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:58.383 [2024-11-17 04:37:43.951845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:58.383 [2024-11-17 04:37:43.951854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:58.383 [2024-11-17 04:37:43.951862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.383 [2024-11-17 04:37:43.951869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:58.383 [2024-11-17 04:37:43.951877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:31:58.383 [2024-11-17 04:37:43.951888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.383 [2024-11-17 04:37:43.961715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:43.961868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:58.384 [2024-11-17 04:37:43.961930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.785 ms 00:31:58.384 [2024-11-17 04:37:43.961961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:43.962062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:43.962083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:58.384 [2024-11-17 04:37:43.962104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:58.384 [2024-11-17 04:37:43.962123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:43.987140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:43.987481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:58.384 [2024-11-17 04:37:43.987689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.946 ms 
00:31:58.384 [2024-11-17 04:37:43.987784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:43.987907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:43.987962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:58.384 [2024-11-17 04:37:43.988021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:58.384 [2024-11-17 04:37:43.988242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:43.988532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:43.988726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:58.384 [2024-11-17 04:37:43.988855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:58.384 [2024-11-17 04:37:43.988904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:43.989196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:43.989261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:58.384 [2024-11-17 04:37:43.989403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:31:58.384 [2024-11-17 04:37:43.989458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:43.998041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:43.998181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:58.384 [2024-11-17 04:37:43.998233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.512 ms 00:31:58.384 [2024-11-17 04:37:43.998264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:43.998445] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:58.384 [2024-11-17 04:37:43.998489] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:58.384 [2024-11-17 04:37:43.998525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:43.998545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:58.384 [2024-11-17 04:37:43.998566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:31:58.384 [2024-11-17 04:37:43.998643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.011002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.011146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:58.384 [2024-11-17 04:37:44.011202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.281 ms 00:31:58.384 [2024-11-17 04:37:44.011224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.011365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.011410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:58.384 [2024-11-17 04:37:44.011431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:58.384 [2024-11-17 04:37:44.011604] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.011708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.011779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:58.384 [2024-11-17 04:37:44.011808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:58.384 [2024-11-17 04:37:44.011864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.012204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.012322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:58.384 [2024-11-17 04:37:44.012349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:31:58.384 [2024-11-17 04:37:44.012368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.012426] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:58.384 [2024-11-17 04:37:44.012524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.012549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:58.384 [2024-11-17 04:37:44.012590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:31:58.384 [2024-11-17 04:37:44.012610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.021960] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:58.384 [2024-11-17 04:37:44.022219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.022335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:58.384 [2024-11-17 04:37:44.022350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.454 ms 00:31:58.384 [2024-11-17 04:37:44.022359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.024978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.025015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:58.384 [2024-11-17 04:37:44.025025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:31:58.384 [2024-11-17 04:37:44.025032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.025109] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:58.384 [2024-11-17 04:37:44.025717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.025734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:58.384 [2024-11-17 04:37:44.025744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:31:58.384 [2024-11-17 04:37:44.025755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.025782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.025790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:58.384 [2024-11-17 04:37:44.025798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 
00:31:58.384 [2024-11-17 04:37:44.025805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.025841] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:58.384 [2024-11-17 04:37:44.025850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.025858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:58.384 [2024-11-17 04:37:44.025866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:58.384 [2024-11-17 04:37:44.025873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.031780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.031830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:58.384 [2024-11-17 04:37:44.031842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.887 ms 00:31:58.384 [2024-11-17 04:37:44.031850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.031935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.384 [2024-11-17 04:37:44.031945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:58.384 [2024-11-17 04:37:44.031961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:58.384 [2024-11-17 04:37:44.031969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.384 [2024-11-17 04:37:44.033164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 86.373 ms, result 0 00:31:59.774  [2024-11-17T04:37:46.447Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-17T04:37:47.394Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-17T04:37:48.339Z] Copying: 31/1024 [MB] (10 MBps) [2024-11-17T04:37:49.286Z] Copying: 45/1024 [MB] (13 MBps) [2024-11-17T04:37:50.233Z] Copying: 59/1024 [MB] (13 MBps) [2024-11-17T04:37:51.622Z] Copying: 70/1024 [MB] (11 MBps) [2024-11-17T04:37:52.563Z] Copying: 81/1024 [MB] (11 MBps) [2024-11-17T04:37:53.505Z] Copying: 99/1024 [MB] (17 MBps) [2024-11-17T04:37:54.447Z] Copying: 111/1024 [MB] (12 MBps) [2024-11-17T04:37:55.390Z] Copying: 134/1024 [MB] (23 MBps) [2024-11-17T04:37:56.336Z] Copying: 147/1024 [MB] (13 MBps) [2024-11-17T04:37:57.278Z] Copying: 161/1024 [MB] (13 MBps) [2024-11-17T04:37:58.663Z] Copying: 180/1024 [MB] (18 MBps) [2024-11-17T04:37:59.235Z] Copying: 200/1024 [MB] (20 MBps) [2024-11-17T04:38:00.616Z] Copying: 216/1024 [MB] (16 MBps) [2024-11-17T04:38:01.557Z] Copying: 236/1024 [MB] (19 MBps) [2024-11-17T04:38:02.584Z] Copying: 255/1024 [MB] (19 MBps) [2024-11-17T04:38:03.553Z] Copying: 270/1024 [MB] (14 MBps) [2024-11-17T04:38:04.499Z] Copying: 284/1024 [MB] (13 MBps) [2024-11-17T04:38:05.440Z] Copying: 301/1024 [MB] (17 MBps) [2024-11-17T04:38:06.382Z] Copying: 312/1024 [MB] (11 MBps) [2024-11-17T04:38:07.327Z] Copying: 324/1024 [MB] (11 MBps) [2024-11-17T04:38:08.273Z] Copying: 335/1024 [MB] (11 MBps) [2024-11-17T04:38:09.658Z] Copying: 350/1024 [MB] (15 MBps) [2024-11-17T04:38:10.232Z] Copying: 369/1024 [MB] (18 MBps) [2024-11-17T04:38:11.621Z] Copying: 385/1024 [MB] (15 MBps) [2024-11-17T04:38:12.566Z] Copying: 408/1024 [MB] (23 MBps) [2024-11-17T04:38:13.511Z] Copying: 428/1024 [MB] (20 MBps) [2024-11-17T04:38:14.456Z] Copying: 439/1024 [MB] (10 MBps) [2024-11-17T04:38:15.400Z] Copying: 
449/1024 [MB] (10 MBps) [2024-11-17T04:38:16.346Z] Copying: 460/1024 [MB] (11 MBps) [2024-11-17T04:38:17.290Z] Copying: 471/1024 [MB] (10 MBps) [2024-11-17T04:38:18.233Z] Copying: 482/1024 [MB] (10 MBps) [2024-11-17T04:38:19.619Z] Copying: 492/1024 [MB] (10 MBps) [2024-11-17T04:38:20.559Z] Copying: 508/1024 [MB] (15 MBps) [2024-11-17T04:38:21.499Z] Copying: 518/1024 [MB] (10 MBps) [2024-11-17T04:38:22.444Z] Copying: 546/1024 [MB] (27 MBps) [2024-11-17T04:38:23.388Z] Copying: 563/1024 [MB] (16 MBps) [2024-11-17T04:38:24.329Z] Copying: 573/1024 [MB] (10 MBps) [2024-11-17T04:38:25.273Z] Copying: 584/1024 [MB] (10 MBps) [2024-11-17T04:38:26.657Z] Copying: 595/1024 [MB] (10 MBps) [2024-11-17T04:38:27.230Z] Copying: 611/1024 [MB] (16 MBps) [2024-11-17T04:38:28.617Z] Copying: 622/1024 [MB] (10 MBps) [2024-11-17T04:38:29.562Z] Copying: 636/1024 [MB] (13 MBps) [2024-11-17T04:38:30.506Z] Copying: 651/1024 [MB] (14 MBps) [2024-11-17T04:38:31.449Z] Copying: 662/1024 [MB] (11 MBps) [2024-11-17T04:38:32.423Z] Copying: 681/1024 [MB] (19 MBps) [2024-11-17T04:38:33.362Z] Copying: 704/1024 [MB] (22 MBps) [2024-11-17T04:38:34.334Z] Copying: 726/1024 [MB] (21 MBps) [2024-11-17T04:38:35.300Z] Copying: 746/1024 [MB] (20 MBps) [2024-11-17T04:38:36.246Z] Copying: 766/1024 [MB] (19 MBps) [2024-11-17T04:38:37.627Z] Copying: 781/1024 [MB] (14 MBps) [2024-11-17T04:38:38.569Z] Copying: 797/1024 [MB] (16 MBps) [2024-11-17T04:38:39.513Z] Copying: 812/1024 [MB] (14 MBps) [2024-11-17T04:38:40.451Z] Copying: 826/1024 [MB] (14 MBps) [2024-11-17T04:38:41.395Z] Copying: 843/1024 [MB] (16 MBps) [2024-11-17T04:38:42.336Z] Copying: 854/1024 [MB] (11 MBps) [2024-11-17T04:38:43.275Z] Copying: 876/1024 [MB] (21 MBps) [2024-11-17T04:38:44.664Z] Copying: 891/1024 [MB] (15 MBps) [2024-11-17T04:38:45.237Z] Copying: 903/1024 [MB] (12 MBps) [2024-11-17T04:38:46.625Z] Copying: 916/1024 [MB] (12 MBps) [2024-11-17T04:38:47.567Z] Copying: 936/1024 [MB] (19 MBps) [2024-11-17T04:38:48.509Z] Copying: 958/1024 [MB] (21 MBps) [2024-11-17T04:38:49.453Z] Copying: 976/1024 [MB] (18 MBps) [2024-11-17T04:38:50.392Z] Copying: 989/1024 [MB] (12 MBps) [2024-11-17T04:38:51.330Z] Copying: 1004/1024 [MB] (15 MBps) [2024-11-17T04:38:51.902Z] Copying: 1018/1024 [MB] (14 MBps) [2024-11-17T04:38:51.902Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-17 04:38:51.713819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.175 [2024-11-17 04:38:51.714231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:06.175 [2024-11-17 04:38:51.714262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:06.175 [2024-11-17 04:38:51.714271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.175 [2024-11-17 04:38:51.714306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:06.175 [2024-11-17 04:38:51.715431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.175 [2024-11-17 04:38:51.715465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:06.175 [2024-11-17 04:38:51.715478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:33:06.175 [2024-11-17 04:38:51.715487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.175 [2024-11-17 04:38:51.715717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.175 [2024-11-17 04:38:51.715739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Stop core poller 00:33:06.175 [2024-11-17 04:38:51.715749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:33:06.175 [2024-11-17 04:38:51.715758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.175 [2024-11-17 04:38:51.715786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.175 [2024-11-17 04:38:51.715796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:06.175 [2024-11-17 04:38:51.715804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:06.175 [2024-11-17 04:38:51.715812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.175 [2024-11-17 04:38:51.715873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.175 [2024-11-17 04:38:51.715885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:06.175 [2024-11-17 04:38:51.715894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:06.175 [2024-11-17 04:38:51.715901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.175 [2024-11-17 04:38:51.715915] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:06.175 [2024-11-17 04:38:51.715928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:06.175 [2024-11-17 04:38:51.715943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:06.175 [2024-11-17 04:38:51.715950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:06.175 [2024-11-17 04:38:51.715958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:06.175 [2024-11-17 04:38:51.715966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.715973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.715981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.715988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.715995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716467] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:06.176 [2024-11-17 04:38:51.716692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716827] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:06.177 [2024-11-17 04:38:51.716910] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:06.177 [2024-11-17 04:38:51.716918] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d54227a3-f4e5-474b-a0e2-73feee6cf014 00:33:06.177 [2024-11-17 04:38:51.716925] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:06.177 [2024-11-17 04:38:51.716933] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4384 00:33:06.177 [2024-11-17 04:38:51.716940] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4352 00:33:06.177 [2024-11-17 04:38:51.716948] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:33:06.177 [2024-11-17 04:38:51.716960] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:06.177 [2024-11-17 04:38:51.716975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:06.177 [2024-11-17 04:38:51.716983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:06.177 [2024-11-17 04:38:51.716989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:06.177 [2024-11-17 04:38:51.716995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:06.177 [2024-11-17 04:38:51.717002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.177 [2024-11-17 04:38:51.717013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:06.177 [2024-11-17 04:38:51.717024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:33:06.177 [2024-11-17 04:38:51.717031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.719660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.177 [2024-11-17 04:38:51.719702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:06.177 [2024-11-17 04:38:51.719716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:33:06.177 [2024-11-17 04:38:51.719724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
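The statistics dump above reports 4384 total writes against 4352 user writes, which is consistent with the logged write amplification factor of 1.0074, i.e. roughly total writes divided by user writes. A minimal shell check of that ratio, assuming this is how the reported figure is derived:

awk 'BEGIN { printf "WAF ~= %.4f\n", 4384 / 4352 }'    # prints: WAF ~= 1.0074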
00:33:06.177 [2024-11-17 04:38:51.719838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:06.177 [2024-11-17 04:38:51.719846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:06.177 [2024-11-17 04:38:51.719855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:33:06.177 [2024-11-17 04:38:51.719862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.729496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.729673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:06.177 [2024-11-17 04:38:51.729733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.729757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.729845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.729868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:06.177 [2024-11-17 04:38:51.729888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.729907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.729982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.730068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:06.177 [2024-11-17 04:38:51.730101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.730120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.730149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.730170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:06.177 [2024-11-17 04:38:51.730189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.730207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.746769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.746974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:06.177 [2024-11-17 04:38:51.747062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.747086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.758506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.758677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:06.177 [2024-11-17 04:38:51.758733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.758756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.758848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.758873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:06.177 [2024-11-17 04:38:51.758894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 
04:38:51.758918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.758972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.758993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:06.177 [2024-11-17 04:38:51.759014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.759089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.759175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.759206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:06.177 [2024-11-17 04:38:51.759225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.759243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.759290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.759317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:06.177 [2024-11-17 04:38:51.759338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.759356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.759422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.759503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:06.177 [2024-11-17 04:38:51.759526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.759546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.759615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:06.177 [2024-11-17 04:38:51.759640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:06.177 [2024-11-17 04:38:51.759659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:06.177 [2024-11-17 04:38:51.759683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:06.177 [2024-11-17 04:38:51.759840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 45.983 ms, result 0 00:33:06.439 00:33:06.439 00:33:06.439 04:38:51 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:08.986 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:08.986 Process with pid 92625 is not found 00:33:08.986 Remove shared memory files 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92625 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' 
-z 92625 ']' 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92625 00:33:08.986 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92625) - No such process 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 92625 is not found' 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_band_md /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_l2p_l1 /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_l2p_l2 /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_l2p_l2_ctx /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_nvc_md /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_p2l_pool /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_sb /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_sb_shm /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_trim_bitmap /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_trim_log /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_trim_md /dev/hugepages/ftl_d54227a3-f4e5-474b-a0e2-73feee6cf014_vmap 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:08.986 ************************************ 00:33:08.986 END TEST ftl_restore_fast 00:33:08.986 ************************************ 00:33:08.986 00:33:08.986 real 4m48.828s 00:33:08.986 user 4m36.312s 00:33:08.986 sys 0m12.266s 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.986 04:38:54 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:08.986 Process with pid 83719 is not found 00:33:08.986 04:38:54 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:08.986 04:38:54 ftl -- ftl/ftl.sh@14 -- # killprocess 83719 00:33:08.986 04:38:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 83719 ']' 00:33:08.986 04:38:54 ftl -- common/autotest_common.sh@958 -- # kill -0 83719 00:33:08.986 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83719) - No such process 00:33:08.986 04:38:54 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 83719 is not found' 00:33:08.986 04:38:54 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:08.986 04:38:54 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95603 00:33:08.986 04:38:54 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95603 00:33:08.986 04:38:54 ftl -- common/autotest_common.sh@835 -- # '[' -z 95603 ']' 00:33:08.987 04:38:54 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.987 04:38:54 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:08.987 04:38:54 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
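The killprocess step above probes pid 92625 with kill -0 and, since that process has already exited, only prints a notice before cleanup continues. A minimal sketch of that idiom, under the simplified shape below (the real helper lives in autotest_common.sh and does more bookkeeping):

pid=92625
if kill -0 "$pid" 2>/dev/null; then
  kill "$pid"                                   # process still running: terminate it
else
  echo "Process with pid $pid is not found"     # matches the notice logged above
fi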
00:33:08.987 04:38:54 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.987 04:38:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:08.987 04:38:54 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:08.987 [2024-11-17 04:38:54.555753] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:33:08.987 [2024-11-17 04:38:54.556182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95603 ] 00:33:09.247 [2024-11-17 04:38:54.723601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.247 [2024-11-17 04:38:54.753083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.816 04:38:55 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.816 04:38:55 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:09.816 04:38:55 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:10.076 nvme0n1 00:33:10.076 04:38:55 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:10.076 04:38:55 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:10.076 04:38:55 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:10.336 04:38:55 ftl -- ftl/common.sh@28 -- # stores=3219cdd0-f47b-41d1-83ec-79a673b87137 00:33:10.337 04:38:55 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:10.337 04:38:55 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3219cdd0-f47b-41d1-83ec-79a673b87137 00:33:10.597 04:38:56 ftl -- ftl/ftl.sh@23 -- # killprocess 95603 00:33:10.597 04:38:56 ftl -- common/autotest_common.sh@954 -- # '[' -z 95603 ']' 00:33:10.597 04:38:56 ftl -- common/autotest_common.sh@958 -- # kill -0 95603 00:33:10.597 04:38:56 ftl -- common/autotest_common.sh@959 -- # uname 00:33:10.597 04:38:56 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.597 04:38:56 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95603 00:33:10.597 killing process with pid 95603 00:33:10.597 04:38:56 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:10.597 04:38:56 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:10.597 04:38:56 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95603' 00:33:10.597 04:38:56 ftl -- common/autotest_common.sh@973 -- # kill 95603 00:33:10.597 04:38:56 ftl -- common/autotest_common.sh@978 -- # wait 95603 00:33:10.858 04:38:56 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:11.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:11.119 Waiting for block devices as requested 00:33:11.119 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:11.381 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:11.381 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:11.381 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:16.672 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:16.672 Remove shared memory files 00:33:16.672 04:39:02 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:16.672 04:39:02 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 
00:33:16.672 04:39:02 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:16.672 04:39:02 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:16.672 04:39:02 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:16.672 04:39:02 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:16.672 04:39:02 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:16.672 ************************************ 00:33:16.672 END TEST ftl 00:33:16.672 ************************************ 00:33:16.672 00:33:16.672 real 17m49.345s 00:33:16.672 user 19m37.455s 00:33:16.672 sys 1m36.968s 00:33:16.672 04:39:02 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.672 04:39:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:16.672 04:39:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:16.672 04:39:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:16.672 04:39:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:16.672 04:39:02 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:16.672 04:39:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:16.672 04:39:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:16.672 04:39:02 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:16.672 04:39:02 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:16.672 04:39:02 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:16.672 04:39:02 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:16.672 04:39:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.672 04:39:02 -- common/autotest_common.sh@10 -- # set +x 00:33:16.672 04:39:02 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:16.672 04:39:02 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:16.672 04:39:02 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:16.672 04:39:02 -- common/autotest_common.sh@10 -- # set +x 00:33:18.057 INFO: APP EXITING 00:33:18.057 INFO: killing all VMs 00:33:18.057 INFO: killing vhost app 00:33:18.057 INFO: EXIT DONE 00:33:18.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:18.891 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:18.891 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:18.891 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:18.891 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:19.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:19.724 Cleaning 00:33:19.724 Removing: /var/run/dpdk/spdk0/config 00:33:19.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:19.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:19.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:19.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:19.725 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:19.725 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:19.725 Removing: /var/run/dpdk/spdk0 00:33:19.725 Removing: /var/run/dpdk/spdk_pid69224 00:33:19.725 Removing: /var/run/dpdk/spdk_pid69382 00:33:19.725 Removing: /var/run/dpdk/spdk_pid69584 00:33:19.725 Removing: /var/run/dpdk/spdk_pid69671 00:33:19.725 Removing: /var/run/dpdk/spdk_pid69694 00:33:19.725 Removing: /var/run/dpdk/spdk_pid69806 00:33:19.725 Removing: /var/run/dpdk/spdk_pid69824 00:33:19.725 Removing: /var/run/dpdk/spdk_pid70001 00:33:19.725 Removing: /var/run/dpdk/spdk_pid70074 00:33:19.725 Removing: /var/run/dpdk/spdk_pid70154 00:33:19.725 Removing: /var/run/dpdk/spdk_pid70248 
00:33:19.725 Removing: /var/run/dpdk/spdk_pid70329 00:33:19.725 Removing: /var/run/dpdk/spdk_pid70363 00:33:19.725 Removing: /var/run/dpdk/spdk_pid70399 00:33:19.725 Removing: /var/run/dpdk/spdk_pid70470 00:33:19.725 Removing: /var/run/dpdk/spdk_pid70543 00:33:19.725 Removing: /var/run/dpdk/spdk_pid70962 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71010 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71051 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71067 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71125 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71141 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71199 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71215 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71257 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71275 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71317 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71335 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71462 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71504 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71582 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71743 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71816 00:33:19.725 Removing: /var/run/dpdk/spdk_pid71847 00:33:19.725 Removing: /var/run/dpdk/spdk_pid72269 00:33:19.725 Removing: /var/run/dpdk/spdk_pid72356 00:33:19.725 Removing: /var/run/dpdk/spdk_pid72456 00:33:19.725 Removing: /var/run/dpdk/spdk_pid72497 00:33:19.725 Removing: /var/run/dpdk/spdk_pid72518 00:33:19.725 Removing: /var/run/dpdk/spdk_pid72591 00:33:19.725 Removing: /var/run/dpdk/spdk_pid73208 00:33:19.725 Removing: /var/run/dpdk/spdk_pid73236 00:33:19.725 Removing: /var/run/dpdk/spdk_pid73691 00:33:19.725 Removing: /var/run/dpdk/spdk_pid73789 00:33:19.725 Removing: /var/run/dpdk/spdk_pid73887 00:33:19.725 Removing: /var/run/dpdk/spdk_pid73929 00:33:19.725 Removing: /var/run/dpdk/spdk_pid73955 00:33:19.725 Removing: /var/run/dpdk/spdk_pid73975 00:33:19.725 Removing: /var/run/dpdk/spdk_pid75805 00:33:19.725 Removing: /var/run/dpdk/spdk_pid75920 00:33:19.725 Removing: /var/run/dpdk/spdk_pid75930 00:33:19.725 Removing: /var/run/dpdk/spdk_pid75942 00:33:19.725 Removing: /var/run/dpdk/spdk_pid75987 00:33:19.725 Removing: /var/run/dpdk/spdk_pid75991 00:33:19.725 Removing: /var/run/dpdk/spdk_pid76003 00:33:19.986 Removing: /var/run/dpdk/spdk_pid76048 00:33:19.986 Removing: /var/run/dpdk/spdk_pid76052 00:33:19.986 Removing: /var/run/dpdk/spdk_pid76064 00:33:19.986 Removing: /var/run/dpdk/spdk_pid76109 00:33:19.986 Removing: /var/run/dpdk/spdk_pid76113 00:33:19.986 Removing: /var/run/dpdk/spdk_pid76125 00:33:19.986 Removing: /var/run/dpdk/spdk_pid77485 00:33:19.986 Removing: /var/run/dpdk/spdk_pid77566 00:33:19.986 Removing: /var/run/dpdk/spdk_pid78961 00:33:19.986 Removing: /var/run/dpdk/spdk_pid80330 00:33:19.986 Removing: /var/run/dpdk/spdk_pid80385 00:33:19.986 Removing: /var/run/dpdk/spdk_pid80439 00:33:19.986 Removing: /var/run/dpdk/spdk_pid80493 00:33:19.986 Removing: /var/run/dpdk/spdk_pid80576 00:33:19.986 Removing: /var/run/dpdk/spdk_pid80641 00:33:19.986 Removing: /var/run/dpdk/spdk_pid80779 00:33:19.986 Removing: /var/run/dpdk/spdk_pid81125 00:33:19.986 Removing: /var/run/dpdk/spdk_pid81149 00:33:19.986 Removing: /var/run/dpdk/spdk_pid81581 00:33:19.986 Removing: /var/run/dpdk/spdk_pid81754 00:33:19.986 Removing: /var/run/dpdk/spdk_pid81838 00:33:19.986 Removing: /var/run/dpdk/spdk_pid81942 00:33:19.986 Removing: /var/run/dpdk/spdk_pid81986 00:33:19.986 Removing: /var/run/dpdk/spdk_pid82012 00:33:19.986 Removing: /var/run/dpdk/spdk_pid82318 00:33:19.986 Removing: 
/var/run/dpdk/spdk_pid82356 00:33:19.986 Removing: /var/run/dpdk/spdk_pid82412 00:33:19.986 Removing: /var/run/dpdk/spdk_pid82781 00:33:19.986 Removing: /var/run/dpdk/spdk_pid82920 00:33:19.986 Removing: /var/run/dpdk/spdk_pid83719 00:33:19.986 Removing: /var/run/dpdk/spdk_pid83840 00:33:19.986 Removing: /var/run/dpdk/spdk_pid83987 00:33:19.986 Removing: /var/run/dpdk/spdk_pid84084 00:33:19.986 Removing: /var/run/dpdk/spdk_pid84359 00:33:19.986 Removing: /var/run/dpdk/spdk_pid84590 00:33:19.986 Removing: /var/run/dpdk/spdk_pid84931 00:33:19.986 Removing: /var/run/dpdk/spdk_pid85096 00:33:19.986 Removing: /var/run/dpdk/spdk_pid85317 00:33:19.986 Removing: /var/run/dpdk/spdk_pid85353 00:33:19.986 Removing: /var/run/dpdk/spdk_pid85543 00:33:19.986 Removing: /var/run/dpdk/spdk_pid85557 00:33:19.986 Removing: /var/run/dpdk/spdk_pid85599 00:33:19.986 Removing: /var/run/dpdk/spdk_pid85851 00:33:19.986 Removing: /var/run/dpdk/spdk_pid86065 00:33:19.986 Removing: /var/run/dpdk/spdk_pid86686 00:33:19.986 Removing: /var/run/dpdk/spdk_pid87405 00:33:19.986 Removing: /var/run/dpdk/spdk_pid88138 00:33:19.986 Removing: /var/run/dpdk/spdk_pid88929 00:33:19.986 Removing: /var/run/dpdk/spdk_pid89072 00:33:19.986 Removing: /var/run/dpdk/spdk_pid89152 00:33:19.986 Removing: /var/run/dpdk/spdk_pid89797 00:33:19.986 Removing: /var/run/dpdk/spdk_pid89844 00:33:19.986 Removing: /var/run/dpdk/spdk_pid90368 00:33:19.986 Removing: /var/run/dpdk/spdk_pid90903 00:33:19.986 Removing: /var/run/dpdk/spdk_pid91689 00:33:19.986 Removing: /var/run/dpdk/spdk_pid91804 00:33:19.986 Removing: /var/run/dpdk/spdk_pid91836 00:33:19.986 Removing: /var/run/dpdk/spdk_pid91893 00:33:19.986 Removing: /var/run/dpdk/spdk_pid91943 00:33:19.986 Removing: /var/run/dpdk/spdk_pid91999 00:33:19.986 Removing: /var/run/dpdk/spdk_pid92186 00:33:19.986 Removing: /var/run/dpdk/spdk_pid92255 00:33:19.986 Removing: /var/run/dpdk/spdk_pid92320 00:33:19.986 Removing: /var/run/dpdk/spdk_pid92404 00:33:19.986 Removing: /var/run/dpdk/spdk_pid92438 00:33:19.986 Removing: /var/run/dpdk/spdk_pid92495 00:33:19.986 Removing: /var/run/dpdk/spdk_pid92625 00:33:19.986 Removing: /var/run/dpdk/spdk_pid92842 00:33:19.986 Removing: /var/run/dpdk/spdk_pid93388 00:33:19.986 Removing: /var/run/dpdk/spdk_pid94202 00:33:19.986 Removing: /var/run/dpdk/spdk_pid94856 00:33:19.986 Removing: /var/run/dpdk/spdk_pid95603 00:33:19.986 Clean 00:33:19.986 04:39:05 -- common/autotest_common.sh@1453 -- # return 0 00:33:19.986 04:39:05 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:19.986 04:39:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:19.986 04:39:05 -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 04:39:05 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:33:20.247 04:39:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:20.247 04:39:05 -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 04:39:05 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:20.247 04:39:05 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:20.247 04:39:05 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:20.247 04:39:05 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:20.247 04:39:05 -- spdk/autotest.sh@398 -- # hostname 00:33:20.247 04:39:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:20.507 geninfo: WARNING: invalid characters removed from testname! 00:33:47.169 04:39:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:49.078 04:39:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:51.629 04:39:37 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:54.182 04:39:39 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:56.731 04:39:41 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:59.279 04:39:44 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:01.186 04:39:46 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:01.187 04:39:46 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:01.187 04:39:46 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:01.187 04:39:46 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:01.187 04:39:46 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:01.187 04:39:46 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:01.187 + [[ -n 5768 ]] 00:34:01.187 + sudo kill 5768 00:34:01.197 [Pipeline] } 00:34:01.216 [Pipeline] // timeout 00:34:01.222 [Pipeline] } 00:34:01.241 
[Pipeline] // stage 00:34:01.248 [Pipeline] } 00:34:01.266 [Pipeline] // catchError 00:34:01.278 [Pipeline] stage 00:34:01.282 [Pipeline] { (Stop VM) 00:34:01.296 [Pipeline] sh 00:34:01.584 + vagrant halt 00:34:04.123 ==> default: Halting domain... 00:34:10.718 [Pipeline] sh 00:34:10.999 + vagrant destroy -f 00:34:13.541 ==> default: Removing domain... 00:34:14.123 [Pipeline] sh 00:34:14.408 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:34:14.419 [Pipeline] } 00:34:14.436 [Pipeline] // stage 00:34:14.443 [Pipeline] } 00:34:14.458 [Pipeline] // dir 00:34:14.464 [Pipeline] } 00:34:14.480 [Pipeline] // wrap 00:34:14.487 [Pipeline] } 00:34:14.500 [Pipeline] // catchError 00:34:14.510 [Pipeline] stage 00:34:14.512 [Pipeline] { (Epilogue) 00:34:14.526 [Pipeline] sh 00:34:14.811 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:20.130 [Pipeline] catchError 00:34:20.132 [Pipeline] { 00:34:20.147 [Pipeline] sh 00:34:20.436 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:20.436 Artifacts sizes are good 00:34:20.447 [Pipeline] } 00:34:20.462 [Pipeline] // catchError 00:34:20.473 [Pipeline] archiveArtifacts 00:34:20.481 Archiving artifacts 00:34:20.597 [Pipeline] cleanWs 00:34:20.613 [WS-CLEANUP] Deleting project workspace... 00:34:20.613 [WS-CLEANUP] Deferred wipeout is used... 00:34:20.621 [WS-CLEANUP] done 00:34:20.623 [Pipeline] } 00:34:20.640 [Pipeline] // stage 00:34:20.646 [Pipeline] } 00:34:20.660 [Pipeline] // node 00:34:20.665 [Pipeline] End of Pipeline 00:34:20.722 Finished: SUCCESS
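For reference, the coverage post-processing logged just before the VM teardown amounts to one lcov capture plus a chain of --remove filters over cov_total.info. A condensed sketch using the paths and test name shown in the log (the --rc branch/function-coverage options and per-call --ignore-errors flags are omitted here for brevity):

SPDK=/home/vagrant/spdk_repo/spdk
OUT=$SPDK/../output
lcov -q -c --no-external -d "$SPDK" -t fedora39-cloud-1721788873-2326 -o "$OUT/cov_test.info"
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o "$OUT/cov_total.info"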